00:00:00.000 Started by upstream project "autotest-per-patch" build number 132302 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.013 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.014 The recommended git tool is: git 00:00:00.014 using credential 00000000-0000-0000-0000-000000000002 00:00:00.017 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.034 Fetching changes from the remote Git repository 00:00:00.036 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.060 Using shallow fetch with depth 1 00:00:00.060 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.060 > git --version # timeout=10 00:00:00.088 > git --version # 'git version 2.39.2' 00:00:00.088 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.140 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.140 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.125 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.137 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.148 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:03.148 > git config core.sparsecheckout # timeout=10 00:00:03.159 > git read-tree -mu HEAD # timeout=10 00:00:03.174 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:03.191 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:03.191 > git rev-list 
--no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:03.287 [Pipeline] Start of Pipeline 00:00:03.301 [Pipeline] library 00:00:03.302 Loading library shm_lib@master 00:00:03.302 Library shm_lib@master is cached. Copying from home. 00:00:03.314 [Pipeline] node 00:00:03.322 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest 00:00:03.323 [Pipeline] { 00:00:03.333 [Pipeline] catchError 00:00:03.335 [Pipeline] { 00:00:03.347 [Pipeline] wrap 00:00:03.357 [Pipeline] { 00:00:03.365 [Pipeline] stage 00:00:03.366 [Pipeline] { (Prologue) 00:00:03.385 [Pipeline] echo 00:00:03.387 Node: VM-host-SM17 00:00:03.396 [Pipeline] cleanWs 00:00:03.404 [WS-CLEANUP] Deleting project workspace... 00:00:03.404 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.409 [WS-CLEANUP] done 00:00:03.586 [Pipeline] setCustomBuildProperty 00:00:03.683 [Pipeline] httpRequest 00:00:05.176 [Pipeline] echo 00:00:05.177 Sorcerer 10.211.164.101 is alive 00:00:05.186 [Pipeline] retry 00:00:05.187 [Pipeline] { 00:00:05.200 [Pipeline] httpRequest 00:00:05.204 HttpMethod: GET 00:00:05.205 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.205 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.216 Response Code: HTTP/1.1 200 OK 00:00:05.217 Success: Status code 200 is in the accepted range: 200,404 00:00:05.217 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.210 [Pipeline] } 00:00:08.226 [Pipeline] // retry 00:00:08.233 [Pipeline] sh 00:00:08.511 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.527 [Pipeline] httpRequest 00:00:10.381 [Pipeline] echo 00:00:10.383 Sorcerer 10.211.164.101 is alive 00:00:10.394 [Pipeline] retry 00:00:10.396 [Pipeline] { 00:00:10.412 [Pipeline] httpRequest 00:00:10.417 HttpMethod: GET 00:00:10.419 URL: 
http://10.211.164.101/packages/spdk_514198259dd4c8bcbf912c664217c4907cf2b670.tar.gz 00:00:10.420 Sending request to url: http://10.211.164.101/packages/spdk_514198259dd4c8bcbf912c664217c4907cf2b670.tar.gz 00:00:10.442 Response Code: HTTP/1.1 200 OK 00:00:10.443 Success: Status code 200 is in the accepted range: 200,404 00:00:10.443 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_514198259dd4c8bcbf912c664217c4907cf2b670.tar.gz 00:01:25.952 [Pipeline] } 00:01:25.969 [Pipeline] // retry 00:01:25.977 [Pipeline] sh 00:01:26.255 + tar --no-same-owner -xf spdk_514198259dd4c8bcbf912c664217c4907cf2b670.tar.gz 00:01:29.629 [Pipeline] sh 00:01:29.908 + git -C spdk log --oneline -n5 00:01:29.908 514198259 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:01:29.908 59da1a1d7 nvmf: Expose DIF type of namespace to host again 00:01:29.908 9a34ab7f7 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:01:29.908 b0a35519c nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:01:29.908 dec6d3843 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:01:29.926 [Pipeline] writeFile 00:01:29.942 [Pipeline] sh 00:01:30.223 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:30.234 [Pipeline] sh 00:01:30.513 + cat autorun-spdk.conf 00:01:30.513 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.513 SPDK_RUN_ASAN=1 00:01:30.513 SPDK_RUN_UBSAN=1 00:01:30.513 SPDK_TEST_RAID=1 00:01:30.513 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.519 RUN_NIGHTLY=0 00:01:30.521 [Pipeline] } 00:01:30.535 [Pipeline] // stage 00:01:30.549 [Pipeline] stage 00:01:30.551 [Pipeline] { (Run VM) 00:01:30.564 [Pipeline] sh 00:01:30.843 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:30.843 + echo 'Start stage prepare_nvme.sh' 00:01:30.843 Start stage prepare_nvme.sh 00:01:30.843 + [[ -n 0 ]] 00:01:30.843 + disk_prefix=ex0 00:01:30.843 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:30.843 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:30.843 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:30.843 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.843 ++ SPDK_RUN_ASAN=1 00:01:30.843 ++ SPDK_RUN_UBSAN=1 00:01:30.843 ++ SPDK_TEST_RAID=1 00:01:30.843 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.843 ++ RUN_NIGHTLY=0 00:01:30.843 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:30.843 + nvme_files=() 00:01:30.843 + declare -A nvme_files 00:01:30.843 + backend_dir=/var/lib/libvirt/images/backends 00:01:30.843 + nvme_files['nvme.img']=5G 00:01:30.843 + nvme_files['nvme-cmb.img']=5G 00:01:30.843 + nvme_files['nvme-multi0.img']=4G 00:01:30.843 + nvme_files['nvme-multi1.img']=4G 00:01:30.843 + nvme_files['nvme-multi2.img']=4G 00:01:30.843 + nvme_files['nvme-openstack.img']=8G 00:01:30.843 + nvme_files['nvme-zns.img']=5G 00:01:30.843 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:30.843 + (( SPDK_TEST_FTL == 1 )) 00:01:30.843 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:30.843 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:30.843 + for nvme in "${!nvme_files[@]}" 00:01:30.843 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:30.843 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.843 + for nvme in "${!nvme_files[@]}" 00:01:30.843 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:30.843 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.843 + for nvme in "${!nvme_files[@]}" 00:01:30.843 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:30.843 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:30.843 + for nvme in "${!nvme_files[@]}" 00:01:30.843 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:30.843 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.843 + for nvme in "${!nvme_files[@]}" 00:01:30.843 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:30.843 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.843 + for nvme in "${!nvme_files[@]}" 00:01:30.844 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:30.844 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.844 + for nvme in "${!nvme_files[@]}" 00:01:30.844 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:30.844 
Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.844 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:30.844 + echo 'End stage prepare_nvme.sh' 00:01:30.844 End stage prepare_nvme.sh 00:01:30.854 [Pipeline] sh 00:01:31.134 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:31.134 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:31.134 00:01:31.134 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:31.134 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:31.134 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:31.134 HELP=0 00:01:31.134 DRY_RUN=0 00:01:31.134 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:31.134 NVME_DISKS_TYPE=nvme,nvme, 00:01:31.134 NVME_AUTO_CREATE=0 00:01:31.134 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:31.134 NVME_CMB=,, 00:01:31.134 NVME_PMR=,, 00:01:31.134 NVME_ZNS=,, 00:01:31.134 NVME_MS=,, 00:01:31.134 NVME_FDP=,, 00:01:31.134 SPDK_VAGRANT_DISTRO=fedora39 00:01:31.134 SPDK_VAGRANT_VMCPU=10 00:01:31.134 SPDK_VAGRANT_VMRAM=12288 00:01:31.134 SPDK_VAGRANT_PROVIDER=libvirt 00:01:31.134 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:31.134 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:31.134 SPDK_OPENSTACK_NETWORK=0 00:01:31.134 VAGRANT_PACKAGE_BOX=0 00:01:31.134 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:31.134 FORCE_DISTRO=true 00:01:31.134 VAGRANT_BOX_VERSION= 00:01:31.134 EXTRA_VAGRANTFILES= 00:01:31.134 NIC_MODEL=e1000 00:01:31.134 00:01:31.134 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:31.134 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:34.418 Bringing machine 'default' up with 'libvirt' provider... 00:01:34.985 ==> default: Creating image (snapshot of base box volume). 00:01:35.245 ==> default: Creating domain with the following settings... 00:01:35.245 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731669197_3ab74d569ae208a9a7e2 00:01:35.245 ==> default: -- Domain type: kvm 00:01:35.245 ==> default: -- Cpus: 10 00:01:35.245 ==> default: -- Feature: acpi 00:01:35.245 ==> default: -- Feature: apic 00:01:35.245 ==> default: -- Feature: pae 00:01:35.245 ==> default: -- Memory: 12288M 00:01:35.245 ==> default: -- Memory Backing: hugepages: 00:01:35.245 ==> default: -- Management MAC: 00:01:35.245 ==> default: -- Loader: 00:01:35.245 ==> default: -- Nvram: 00:01:35.245 ==> default: -- Base box: spdk/fedora39 00:01:35.245 ==> default: -- Storage pool: default 00:01:35.245 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731669197_3ab74d569ae208a9a7e2.img (20G) 00:01:35.245 ==> default: -- Volume Cache: default 00:01:35.245 ==> default: -- Kernel: 00:01:35.245 ==> default: -- Initrd: 00:01:35.245 ==> default: -- Graphics Type: vnc 00:01:35.245 ==> default: -- Graphics Port: -1 00:01:35.245 ==> default: -- Graphics IP: 127.0.0.1 00:01:35.245 ==> default: -- Graphics Password: Not defined 00:01:35.245 ==> default: -- Video Type: cirrus 00:01:35.245 ==> default: -- Video VRAM: 9216 00:01:35.245 ==> default: -- Sound Type: 00:01:35.245 ==> default: -- Keymap: en-us 00:01:35.245 ==> default: -- TPM Path: 00:01:35.245 ==> 
default: -- INPUT: type=mouse, bus=ps2 00:01:35.245 ==> default: -- Command line args: 00:01:35.245 ==> default: -> value=-device, 00:01:35.245 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:35.245 ==> default: -> value=-drive, 00:01:35.245 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:35.245 ==> default: -> value=-device, 00:01:35.245 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.245 ==> default: -> value=-device, 00:01:35.245 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:35.245 ==> default: -> value=-drive, 00:01:35.245 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:35.245 ==> default: -> value=-device, 00:01:35.245 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.245 ==> default: -> value=-drive, 00:01:35.245 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:35.245 ==> default: -> value=-device, 00:01:35.245 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.245 ==> default: -> value=-drive, 00:01:35.245 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:35.245 ==> default: -> value=-device, 00:01:35.245 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.504 ==> default: Creating shared folders metadata... 00:01:35.504 ==> default: Starting domain. 00:01:37.408 ==> default: Waiting for domain to get an IP address... 00:01:52.394 ==> default: Waiting for SSH to become available... 
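Condensed from the -drive/-device pairs above: QEMU is handed two NVMe controllers — nvme-0 (serial 12340) with a single namespace on ex0-nvme.img, and nvme-1 (serial 12341) carrying three namespaces (nsid 1-3) on the multi0/1/2 images. A hand-run equivalent might look like the sketch below; the device and drive arguments are copied from the log, but the machine, memory, and boot options are omitted (they come from the Vagrant-generated libvirt domain), so treat this as an illustrative reconstruction, not a command captured from this run:

```shell
# Illustrative reconstruction of the NVMe wiring logged above.
# Only the -drive/-device arguments are from the log; everything a real
# invocation would also need (machine type, RAM, boot disk) is left out.
qemu-system-x86_64 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096
```

This topology is what the guest later reports in `setup.sh status`: nvme0 exposing nvme0n1, and nvme1 exposing nvme1n1 through nvme1n3.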
00:01:53.772 ==> default: Configuring and enabling network interfaces...
00:01:57.963 default: SSH address: 192.168.121.5:22
00:01:57.963 default: SSH username: vagrant
00:01:57.963 default: SSH auth method: private key
00:02:00.499 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:08.616 ==> default: Mounting SSHFS shared folder...
00:02:09.549 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:09.549 ==> default: Checking Mount..
00:02:10.923 ==> default: Folder Successfully Mounted!
00:02:10.923 ==> default: Running provisioner: file...
00:02:11.489 default: ~/.gitconfig => .gitconfig
00:02:12.055
00:02:12.055 SUCCESS!
00:02:12.055
00:02:12.055 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:12.055 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:12.055 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
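The `vagrant ssh-config --host vagrant | sed -ne /^Host/,$p | tee ssh_conf` pipeline that follows exists because `vagrant ssh-config` can print status noise before the `Host` block; the sed address range keeps only the lines from the first `Host` onward, so `ssh_conf` is valid input for `ssh -F`. A stand-alone sketch of just the filtering step (the sample text is fabricated for illustration, not captured from this run):

```shell
# Demo of the ssh_conf extraction used by this pipeline: keep everything
# from the first line starting with "Host" to end of input, dropping any
# preamble vagrant printed before it. Sample input is fabricated.
sample='Bringing machine default up with libvirt provider...
Host vagrant
  HostName 192.168.121.5
  User vagrant'
printf '%s\n' "$sample" | sed -ne '/^Host/,$p'   # prints only the Host block
```

Piping through `tee ssh_conf`, as the job does, additionally writes the filtered block to disk for the later `ssh -t -F ssh_conf` calls.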
00:02:12.055
00:02:12.064 [Pipeline] }
00:02:12.079 [Pipeline] // stage
00:02:12.089 [Pipeline] dir
00:02:12.090 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:12.092 [Pipeline] {
00:02:12.104 [Pipeline] catchError
00:02:12.106 [Pipeline] {
00:02:12.119 [Pipeline] sh
00:02:12.397 + vagrant ssh-config --host vagrant
00:02:12.397 + sed -ne /^Host/,$p
00:02:12.397 + tee ssh_conf
00:02:15.740 Host vagrant
00:02:15.740 HostName 192.168.121.5
00:02:15.740 User vagrant
00:02:15.740 Port 22
00:02:15.740 UserKnownHostsFile /dev/null
00:02:15.740 StrictHostKeyChecking no
00:02:15.740 PasswordAuthentication no
00:02:15.740 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:15.740 IdentitiesOnly yes
00:02:15.740 LogLevel FATAL
00:02:15.740 ForwardAgent yes
00:02:15.740 ForwardX11 yes
00:02:15.740
00:02:15.752 [Pipeline] withEnv
00:02:15.754 [Pipeline] {
00:02:15.767 [Pipeline] sh
00:02:16.045 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:16.046 source /etc/os-release
00:02:16.046 [[ -e /image.version ]] && img=$(< /image.version)
00:02:16.046 # Minimal, systemd-like check.
00:02:16.046 if [[ -e /.dockerenv ]]; then
00:02:16.046 # Clear garbage from the node's name:
00:02:16.046 # agt-er_autotest_547-896 -> autotest_547-896
00:02:16.046 # $HOSTNAME is the actual container id
00:02:16.046 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:16.046 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:16.046 # We can assume this is a mount from a host where container is running,
00:02:16.046 # so fetch its hostname to easily identify the target swarm worker.
00:02:16.046 container="$(< /etc/hostname) ($agent)" 00:02:16.046 else 00:02:16.046 # Fallback 00:02:16.046 container=$agent 00:02:16.046 fi 00:02:16.046 fi 00:02:16.046 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:16.046 00:02:16.316 [Pipeline] } 00:02:16.331 [Pipeline] // withEnv 00:02:16.340 [Pipeline] setCustomBuildProperty 00:02:16.355 [Pipeline] stage 00:02:16.357 [Pipeline] { (Tests) 00:02:16.373 [Pipeline] sh 00:02:16.653 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:16.666 [Pipeline] sh 00:02:16.945 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:16.960 [Pipeline] timeout 00:02:16.960 Timeout set to expire in 1 hr 30 min 00:02:16.962 [Pipeline] { 00:02:16.976 [Pipeline] sh 00:02:17.257 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:17.824 HEAD is now at 514198259 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:02:17.837 [Pipeline] sh 00:02:18.117 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:18.389 [Pipeline] sh 00:02:18.671 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:18.702 [Pipeline] sh 00:02:19.025 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:19.025 ++ readlink -f spdk_repo 00:02:19.284 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:19.284 + [[ -n /home/vagrant/spdk_repo ]] 00:02:19.284 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:19.284 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:19.284 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:19.284 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:19.284 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:19.284 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:19.284 + cd /home/vagrant/spdk_repo 00:02:19.284 + source /etc/os-release 00:02:19.284 ++ NAME='Fedora Linux' 00:02:19.284 ++ VERSION='39 (Cloud Edition)' 00:02:19.284 ++ ID=fedora 00:02:19.284 ++ VERSION_ID=39 00:02:19.284 ++ VERSION_CODENAME= 00:02:19.284 ++ PLATFORM_ID=platform:f39 00:02:19.284 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:19.284 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:19.284 ++ LOGO=fedora-logo-icon 00:02:19.284 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:19.284 ++ HOME_URL=https://fedoraproject.org/ 00:02:19.284 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:19.284 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:19.284 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:19.284 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:19.284 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:19.284 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:19.284 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:19.284 ++ SUPPORT_END=2024-11-12 00:02:19.284 ++ VARIANT='Cloud Edition' 00:02:19.284 ++ VARIANT_ID=cloud 00:02:19.284 + uname -a 00:02:19.284 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:19.284 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:19.543 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:19.802 Hugepages 00:02:19.802 node hugesize free / total 00:02:19.802 node0 1048576kB 0 / 0 00:02:19.802 node0 2048kB 0 / 0 00:02:19.802 00:02:19.802 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:19.802 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:19.802 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:19.802 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:19.802 + rm -f /tmp/spdk-ld-path 00:02:19.802 + source autorun-spdk.conf 00:02:19.802 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:19.802 ++ SPDK_RUN_ASAN=1 00:02:19.802 ++ SPDK_RUN_UBSAN=1 00:02:19.802 ++ SPDK_TEST_RAID=1 00:02:19.802 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:19.802 ++ RUN_NIGHTLY=0 00:02:19.802 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:19.802 + [[ -n '' ]] 00:02:19.802 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:19.802 + for M in /var/spdk/build-*-manifest.txt 00:02:19.802 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:19.802 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:19.802 + for M in /var/spdk/build-*-manifest.txt 00:02:19.802 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:19.802 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:19.802 + for M in /var/spdk/build-*-manifest.txt 00:02:19.802 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:19.802 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:19.802 ++ uname 00:02:19.802 + [[ Linux == \L\i\n\u\x ]] 00:02:19.802 + sudo dmesg -T 00:02:19.802 + sudo dmesg --clear 00:02:19.802 + dmesg_pid=5205 00:02:19.802 + [[ Fedora Linux == FreeBSD ]] 00:02:19.802 + sudo dmesg -Tw 00:02:19.802 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:19.802 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:19.802 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:19.802 + [[ -x /usr/src/fio-static/fio ]] 00:02:19.802 + export FIO_BIN=/usr/src/fio-static/fio 00:02:19.802 + FIO_BIN=/usr/src/fio-static/fio 00:02:19.802 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:19.802 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:19.802 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:19.802 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:19.802 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:19.802 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:19.802 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:19.802 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:19.802 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:20.061 11:14:02 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:20.061 11:14:02 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:20.061 11:14:02 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.061 11:14:02 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:02:20.061 11:14:02 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:02:20.061 11:14:02 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:02:20.061 11:14:02 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:20.061 11:14:02 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:02:20.061 11:14:02 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:20.061 11:14:02 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:20.061 11:14:02 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:20.061 11:14:02 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:20.061 11:14:02 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:20.061 11:14:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:20.061 11:14:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:20.061 11:14:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:20.061 11:14:02 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.061 11:14:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.061 11:14:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.061 11:14:02 -- paths/export.sh@5 -- $ export PATH 00:02:20.061 11:14:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.061 11:14:02 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:20.061 11:14:02 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:20.061 11:14:02 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731669242.XXXXXX 00:02:20.061 11:14:02 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731669242.vb2kxA 00:02:20.061 11:14:02 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:20.061 11:14:02 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:20.061 11:14:02 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:20.061 11:14:02 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:20.061 11:14:02 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:20.061 11:14:02 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:20.062 11:14:02 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:20.062 11:14:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.062 11:14:02 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:02:20.062 11:14:02 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:20.062 11:14:02 -- pm/common@17 -- $ local monitor 00:02:20.062 11:14:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.062 11:14:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.062 11:14:02 -- pm/common@25 -- $ sleep 1 00:02:20.062 11:14:02 -- pm/common@21 -- $ date +%s 00:02:20.062 11:14:02 -- pm/common@21 -- $ date +%s 00:02:20.062 
11:14:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731669242 00:02:20.062 11:14:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731669242 00:02:20.062 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731669242_collect-cpu-load.pm.log 00:02:20.062 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731669242_collect-vmstat.pm.log 00:02:20.996 11:14:03 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:20.996 11:14:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:20.996 11:14:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:20.996 11:14:03 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:20.996 11:14:03 -- spdk/autobuild.sh@16 -- $ date -u 00:02:20.996 Fri Nov 15 11:14:03 AM UTC 2024 00:02:20.996 11:14:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:20.996 v25.01-pre-215-g514198259 00:02:20.996 11:14:03 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:20.996 11:14:03 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:20.996 11:14:03 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:20.996 11:14:03 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:20.996 11:14:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.996 ************************************ 00:02:20.996 START TEST asan 00:02:20.996 ************************************ 00:02:20.996 using asan 00:02:20.996 11:14:03 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:02:20.996 00:02:20.996 real 0m0.000s 00:02:20.996 user 0m0.000s 00:02:20.996 sys 0m0.000s 00:02:20.996 11:14:03 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:20.996 11:14:03 asan -- common/autotest_common.sh@10 -- $ set +x 
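The `autobuild_common.sh` trace above shows how the run names its scratch area: the epoch second from `date +%s` is baked into an `mktemp -dt` template, yielding a unique per-run workspace such as `/tmp/spdk_1731669242.vb2kxA`. A reduced sketch of just that step (variable names here are mine, not SPDK's):

```shell
# Reduced sketch of the SPDK_WORKSPACE creation seen in the trace:
# embed the current epoch in the mktemp template, let mktemp append a
# random suffix, and get a unique scratch directory under $TMPDIR.
stamp=$(date +%s)
workspace=$(mktemp -dt "spdk_${stamp}.XXXXXX")
echo "$workspace"   # e.g. /tmp/spdk_<epoch>.<random>
```

The epoch prefix makes concurrent runs sortable by start time, while the `XXXXXX` suffix keeps two runs started in the same second from colliding.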
00:02:20.996 ************************************ 00:02:20.996 END TEST asan 00:02:20.996 ************************************ 00:02:20.996 11:14:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:20.996 11:14:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:20.996 11:14:03 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:20.996 11:14:03 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:20.996 11:14:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.996 ************************************ 00:02:20.996 START TEST ubsan 00:02:20.996 ************************************ 00:02:20.996 using ubsan 00:02:20.996 11:14:03 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:20.996 00:02:20.996 real 0m0.000s 00:02:20.996 user 0m0.000s 00:02:20.996 sys 0m0.000s 00:02:20.996 11:14:03 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:20.996 11:14:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:20.996 ************************************ 00:02:20.996 END TEST ubsan 00:02:20.996 ************************************ 00:02:21.254 11:14:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:21.254 11:14:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:21.254 11:14:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:21.254 11:14:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:21.254 11:14:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:21.254 11:14:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:21.254 11:14:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:21.254 11:14:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:21.254 11:14:03 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:02:21.254 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:21.254 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:21.821 Using 'verbs' RDMA provider 00:02:37.788 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:49.989 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:49.989 Creating mk/config.mk...done. 00:02:49.989 Creating mk/cc.flags.mk...done. 00:02:49.989 Type 'make' to build. 00:02:49.989 11:14:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:49.989 11:14:32 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:49.989 11:14:32 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:49.989 11:14:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:49.989 ************************************ 00:02:49.989 START TEST make 00:02:49.989 ************************************ 00:02:49.989 11:14:32 make -- common/autotest_common.sh@1127 -- $ make -j10 00:02:49.989 make[1]: Nothing to be done for 'all'. 
00:03:04.867 The Meson build system 00:03:04.867 Version: 1.5.0 00:03:04.867 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:04.867 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:04.867 Build type: native build 00:03:04.867 Program cat found: YES (/usr/bin/cat) 00:03:04.867 Project name: DPDK 00:03:04.867 Project version: 24.03.0 00:03:04.867 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:04.867 C linker for the host machine: cc ld.bfd 2.40-14 00:03:04.867 Host machine cpu family: x86_64 00:03:04.867 Host machine cpu: x86_64 00:03:04.867 Message: ## Building in Developer Mode ## 00:03:04.867 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:04.867 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:04.867 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:04.867 Program python3 found: YES (/usr/bin/python3) 00:03:04.867 Program cat found: YES (/usr/bin/cat) 00:03:04.867 Compiler for C supports arguments -march=native: YES 00:03:04.867 Checking for size of "void *" : 8 00:03:04.867 Checking for size of "void *" : 8 (cached) 00:03:04.867 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:04.867 Library m found: YES 00:03:04.867 Library numa found: YES 00:03:04.867 Has header "numaif.h" : YES 00:03:04.867 Library fdt found: NO 00:03:04.867 Library execinfo found: NO 00:03:04.867 Has header "execinfo.h" : YES 00:03:04.867 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:04.867 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:04.867 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:04.867 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:04.867 Run-time dependency openssl found: YES 3.1.1 00:03:04.867 Run-time dependency libpcap found: YES 1.10.4 00:03:04.867 Has header "pcap.h" with dependency 
libpcap: YES 00:03:04.867 Compiler for C supports arguments -Wcast-qual: YES 00:03:04.867 Compiler for C supports arguments -Wdeprecated: YES 00:03:04.867 Compiler for C supports arguments -Wformat: YES 00:03:04.867 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:04.867 Compiler for C supports arguments -Wformat-security: NO 00:03:04.867 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:04.867 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:04.867 Compiler for C supports arguments -Wnested-externs: YES 00:03:04.867 Compiler for C supports arguments -Wold-style-definition: YES 00:03:04.867 Compiler for C supports arguments -Wpointer-arith: YES 00:03:04.867 Compiler for C supports arguments -Wsign-compare: YES 00:03:04.867 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:04.867 Compiler for C supports arguments -Wundef: YES 00:03:04.867 Compiler for C supports arguments -Wwrite-strings: YES 00:03:04.867 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:04.867 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:04.867 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:04.867 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:04.867 Program objdump found: YES (/usr/bin/objdump) 00:03:04.867 Compiler for C supports arguments -mavx512f: YES 00:03:04.867 Checking if "AVX512 checking" compiles: YES 00:03:04.867 Fetching value of define "__SSE4_2__" : 1 00:03:04.867 Fetching value of define "__AES__" : 1 00:03:04.867 Fetching value of define "__AVX__" : 1 00:03:04.867 Fetching value of define "__AVX2__" : 1 00:03:04.867 Fetching value of define "__AVX512BW__" : (undefined) 00:03:04.867 Fetching value of define "__AVX512CD__" : (undefined) 00:03:04.867 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:04.867 Fetching value of define "__AVX512F__" : (undefined) 00:03:04.867 Fetching value of define "__AVX512VL__" : 
(undefined) 00:03:04.867 Fetching value of define "__PCLMUL__" : 1 00:03:04.867 Fetching value of define "__RDRND__" : 1 00:03:04.867 Fetching value of define "__RDSEED__" : 1 00:03:04.867 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:04.867 Fetching value of define "__znver1__" : (undefined) 00:03:04.867 Fetching value of define "__znver2__" : (undefined) 00:03:04.867 Fetching value of define "__znver3__" : (undefined) 00:03:04.867 Fetching value of define "__znver4__" : (undefined) 00:03:04.867 Library asan found: YES 00:03:04.867 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:04.867 Message: lib/log: Defining dependency "log" 00:03:04.867 Message: lib/kvargs: Defining dependency "kvargs" 00:03:04.867 Message: lib/telemetry: Defining dependency "telemetry" 00:03:04.867 Library rt found: YES 00:03:04.867 Checking for function "getentropy" : NO 00:03:04.867 Message: lib/eal: Defining dependency "eal" 00:03:04.867 Message: lib/ring: Defining dependency "ring" 00:03:04.867 Message: lib/rcu: Defining dependency "rcu" 00:03:04.867 Message: lib/mempool: Defining dependency "mempool" 00:03:04.867 Message: lib/mbuf: Defining dependency "mbuf" 00:03:04.867 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:04.867 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:04.867 Compiler for C supports arguments -mpclmul: YES 00:03:04.867 Compiler for C supports arguments -maes: YES 00:03:04.867 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:04.867 Compiler for C supports arguments -mavx512bw: YES 00:03:04.867 Compiler for C supports arguments -mavx512dq: YES 00:03:04.867 Compiler for C supports arguments -mavx512vl: YES 00:03:04.867 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:04.867 Compiler for C supports arguments -mavx2: YES 00:03:04.867 Compiler for C supports arguments -mavx: YES 00:03:04.867 Message: lib/net: Defining dependency "net" 00:03:04.867 Message: lib/meter: Defining 
dependency "meter" 00:03:04.867 Message: lib/ethdev: Defining dependency "ethdev" 00:03:04.867 Message: lib/pci: Defining dependency "pci" 00:03:04.867 Message: lib/cmdline: Defining dependency "cmdline" 00:03:04.867 Message: lib/hash: Defining dependency "hash" 00:03:04.867 Message: lib/timer: Defining dependency "timer" 00:03:04.867 Message: lib/compressdev: Defining dependency "compressdev" 00:03:04.867 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:04.867 Message: lib/dmadev: Defining dependency "dmadev" 00:03:04.867 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:04.867 Message: lib/power: Defining dependency "power" 00:03:04.867 Message: lib/reorder: Defining dependency "reorder" 00:03:04.867 Message: lib/security: Defining dependency "security" 00:03:04.867 Has header "linux/userfaultfd.h" : YES 00:03:04.867 Has header "linux/vduse.h" : YES 00:03:04.867 Message: lib/vhost: Defining dependency "vhost" 00:03:04.867 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:04.867 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:04.867 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:04.867 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:04.867 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:04.867 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:04.867 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:04.867 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:04.867 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:04.867 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:04.867 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:04.867 Configuring doxy-api-html.conf using configuration 00:03:04.867 Configuring doxy-api-man.conf using configuration 00:03:04.867 Program mandb found: YES 
(/usr/bin/mandb) 00:03:04.867 Program sphinx-build found: NO 00:03:04.867 Configuring rte_build_config.h using configuration 00:03:04.867 Message: 00:03:04.867 ================= 00:03:04.867 Applications Enabled 00:03:04.867 ================= 00:03:04.867 00:03:04.867 apps: 00:03:04.867 00:03:04.867 00:03:04.867 Message: 00:03:04.867 ================= 00:03:04.867 Libraries Enabled 00:03:04.867 ================= 00:03:04.867 00:03:04.867 libs: 00:03:04.867 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:04.867 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:04.867 cryptodev, dmadev, power, reorder, security, vhost, 00:03:04.867 00:03:04.867 Message: 00:03:04.867 =============== 00:03:04.867 Drivers Enabled 00:03:04.867 =============== 00:03:04.867 00:03:04.867 common: 00:03:04.867 00:03:04.867 bus: 00:03:04.867 pci, vdev, 00:03:04.867 mempool: 00:03:04.867 ring, 00:03:04.867 dma: 00:03:04.867 00:03:04.867 net: 00:03:04.867 00:03:04.867 crypto: 00:03:04.867 00:03:04.867 compress: 00:03:04.867 00:03:04.867 vdpa: 00:03:04.867 00:03:04.867 00:03:04.867 Message: 00:03:04.867 ================= 00:03:04.867 Content Skipped 00:03:04.867 ================= 00:03:04.867 00:03:04.867 apps: 00:03:04.867 dumpcap: explicitly disabled via build config 00:03:04.867 graph: explicitly disabled via build config 00:03:04.867 pdump: explicitly disabled via build config 00:03:04.867 proc-info: explicitly disabled via build config 00:03:04.867 test-acl: explicitly disabled via build config 00:03:04.867 test-bbdev: explicitly disabled via build config 00:03:04.868 test-cmdline: explicitly disabled via build config 00:03:04.868 test-compress-perf: explicitly disabled via build config 00:03:04.868 test-crypto-perf: explicitly disabled via build config 00:03:04.868 test-dma-perf: explicitly disabled via build config 00:03:04.868 test-eventdev: explicitly disabled via build config 00:03:04.868 test-fib: explicitly disabled via build config 00:03:04.868 
test-flow-perf: explicitly disabled via build config 00:03:04.868 test-gpudev: explicitly disabled via build config 00:03:04.868 test-mldev: explicitly disabled via build config 00:03:04.868 test-pipeline: explicitly disabled via build config 00:03:04.868 test-pmd: explicitly disabled via build config 00:03:04.868 test-regex: explicitly disabled via build config 00:03:04.868 test-sad: explicitly disabled via build config 00:03:04.868 test-security-perf: explicitly disabled via build config 00:03:04.868 00:03:04.868 libs: 00:03:04.868 argparse: explicitly disabled via build config 00:03:04.868 metrics: explicitly disabled via build config 00:03:04.868 acl: explicitly disabled via build config 00:03:04.868 bbdev: explicitly disabled via build config 00:03:04.868 bitratestats: explicitly disabled via build config 00:03:04.868 bpf: explicitly disabled via build config 00:03:04.868 cfgfile: explicitly disabled via build config 00:03:04.868 distributor: explicitly disabled via build config 00:03:04.868 efd: explicitly disabled via build config 00:03:04.868 eventdev: explicitly disabled via build config 00:03:04.868 dispatcher: explicitly disabled via build config 00:03:04.868 gpudev: explicitly disabled via build config 00:03:04.868 gro: explicitly disabled via build config 00:03:04.868 gso: explicitly disabled via build config 00:03:04.868 ip_frag: explicitly disabled via build config 00:03:04.868 jobstats: explicitly disabled via build config 00:03:04.868 latencystats: explicitly disabled via build config 00:03:04.868 lpm: explicitly disabled via build config 00:03:04.868 member: explicitly disabled via build config 00:03:04.868 pcapng: explicitly disabled via build config 00:03:04.868 rawdev: explicitly disabled via build config 00:03:04.868 regexdev: explicitly disabled via build config 00:03:04.868 mldev: explicitly disabled via build config 00:03:04.868 rib: explicitly disabled via build config 00:03:04.868 sched: explicitly disabled via build config 00:03:04.868 
stack: explicitly disabled via build config 00:03:04.868 ipsec: explicitly disabled via build config 00:03:04.868 pdcp: explicitly disabled via build config 00:03:04.868 fib: explicitly disabled via build config 00:03:04.868 port: explicitly disabled via build config 00:03:04.868 pdump: explicitly disabled via build config 00:03:04.868 table: explicitly disabled via build config 00:03:04.868 pipeline: explicitly disabled via build config 00:03:04.868 graph: explicitly disabled via build config 00:03:04.868 node: explicitly disabled via build config 00:03:04.868 00:03:04.868 drivers: 00:03:04.868 common/cpt: not in enabled drivers build config 00:03:04.868 common/dpaax: not in enabled drivers build config 00:03:04.868 common/iavf: not in enabled drivers build config 00:03:04.868 common/idpf: not in enabled drivers build config 00:03:04.868 common/ionic: not in enabled drivers build config 00:03:04.868 common/mvep: not in enabled drivers build config 00:03:04.868 common/octeontx: not in enabled drivers build config 00:03:04.868 bus/auxiliary: not in enabled drivers build config 00:03:04.868 bus/cdx: not in enabled drivers build config 00:03:04.868 bus/dpaa: not in enabled drivers build config 00:03:04.868 bus/fslmc: not in enabled drivers build config 00:03:04.868 bus/ifpga: not in enabled drivers build config 00:03:04.868 bus/platform: not in enabled drivers build config 00:03:04.868 bus/uacce: not in enabled drivers build config 00:03:04.868 bus/vmbus: not in enabled drivers build config 00:03:04.868 common/cnxk: not in enabled drivers build config 00:03:04.868 common/mlx5: not in enabled drivers build config 00:03:04.868 common/nfp: not in enabled drivers build config 00:03:04.868 common/nitrox: not in enabled drivers build config 00:03:04.868 common/qat: not in enabled drivers build config 00:03:04.868 common/sfc_efx: not in enabled drivers build config 00:03:04.868 mempool/bucket: not in enabled drivers build config 00:03:04.868 mempool/cnxk: not in enabled 
drivers build config 00:03:04.868 mempool/dpaa: not in enabled drivers build config 00:03:04.868 mempool/dpaa2: not in enabled drivers build config 00:03:04.868 mempool/octeontx: not in enabled drivers build config 00:03:04.868 mempool/stack: not in enabled drivers build config 00:03:04.868 dma/cnxk: not in enabled drivers build config 00:03:04.868 dma/dpaa: not in enabled drivers build config 00:03:04.868 dma/dpaa2: not in enabled drivers build config 00:03:04.868 dma/hisilicon: not in enabled drivers build config 00:03:04.868 dma/idxd: not in enabled drivers build config 00:03:04.868 dma/ioat: not in enabled drivers build config 00:03:04.868 dma/skeleton: not in enabled drivers build config 00:03:04.868 net/af_packet: not in enabled drivers build config 00:03:04.868 net/af_xdp: not in enabled drivers build config 00:03:04.868 net/ark: not in enabled drivers build config 00:03:04.868 net/atlantic: not in enabled drivers build config 00:03:04.868 net/avp: not in enabled drivers build config 00:03:04.868 net/axgbe: not in enabled drivers build config 00:03:04.868 net/bnx2x: not in enabled drivers build config 00:03:04.868 net/bnxt: not in enabled drivers build config 00:03:04.868 net/bonding: not in enabled drivers build config 00:03:04.868 net/cnxk: not in enabled drivers build config 00:03:04.868 net/cpfl: not in enabled drivers build config 00:03:04.868 net/cxgbe: not in enabled drivers build config 00:03:04.868 net/dpaa: not in enabled drivers build config 00:03:04.868 net/dpaa2: not in enabled drivers build config 00:03:04.868 net/e1000: not in enabled drivers build config 00:03:04.868 net/ena: not in enabled drivers build config 00:03:04.868 net/enetc: not in enabled drivers build config 00:03:04.868 net/enetfec: not in enabled drivers build config 00:03:04.868 net/enic: not in enabled drivers build config 00:03:04.868 net/failsafe: not in enabled drivers build config 00:03:04.868 net/fm10k: not in enabled drivers build config 00:03:04.868 net/gve: not in 
enabled drivers build config 00:03:04.868 net/hinic: not in enabled drivers build config 00:03:04.868 net/hns3: not in enabled drivers build config 00:03:04.868 net/i40e: not in enabled drivers build config 00:03:04.868 net/iavf: not in enabled drivers build config 00:03:04.868 net/ice: not in enabled drivers build config 00:03:04.868 net/idpf: not in enabled drivers build config 00:03:04.868 net/igc: not in enabled drivers build config 00:03:04.868 net/ionic: not in enabled drivers build config 00:03:04.868 net/ipn3ke: not in enabled drivers build config 00:03:04.868 net/ixgbe: not in enabled drivers build config 00:03:04.868 net/mana: not in enabled drivers build config 00:03:04.868 net/memif: not in enabled drivers build config 00:03:04.868 net/mlx4: not in enabled drivers build config 00:03:04.868 net/mlx5: not in enabled drivers build config 00:03:04.868 net/mvneta: not in enabled drivers build config 00:03:04.868 net/mvpp2: not in enabled drivers build config 00:03:04.868 net/netvsc: not in enabled drivers build config 00:03:04.868 net/nfb: not in enabled drivers build config 00:03:04.868 net/nfp: not in enabled drivers build config 00:03:04.868 net/ngbe: not in enabled drivers build config 00:03:04.868 net/null: not in enabled drivers build config 00:03:04.868 net/octeontx: not in enabled drivers build config 00:03:04.868 net/octeon_ep: not in enabled drivers build config 00:03:04.868 net/pcap: not in enabled drivers build config 00:03:04.868 net/pfe: not in enabled drivers build config 00:03:04.868 net/qede: not in enabled drivers build config 00:03:04.868 net/ring: not in enabled drivers build config 00:03:04.868 net/sfc: not in enabled drivers build config 00:03:04.868 net/softnic: not in enabled drivers build config 00:03:04.868 net/tap: not in enabled drivers build config 00:03:04.868 net/thunderx: not in enabled drivers build config 00:03:04.868 net/txgbe: not in enabled drivers build config 00:03:04.868 net/vdev_netvsc: not in enabled drivers build 
config 00:03:04.868 net/vhost: not in enabled drivers build config 00:03:04.868 net/virtio: not in enabled drivers build config 00:03:04.868 net/vmxnet3: not in enabled drivers build config 00:03:04.868 raw/*: missing internal dependency, "rawdev" 00:03:04.868 crypto/armv8: not in enabled drivers build config 00:03:04.868 crypto/bcmfs: not in enabled drivers build config 00:03:04.868 crypto/caam_jr: not in enabled drivers build config 00:03:04.868 crypto/ccp: not in enabled drivers build config 00:03:04.868 crypto/cnxk: not in enabled drivers build config 00:03:04.868 crypto/dpaa_sec: not in enabled drivers build config 00:03:04.868 crypto/dpaa2_sec: not in enabled drivers build config 00:03:04.868 crypto/ipsec_mb: not in enabled drivers build config 00:03:04.868 crypto/mlx5: not in enabled drivers build config 00:03:04.868 crypto/mvsam: not in enabled drivers build config 00:03:04.868 crypto/nitrox: not in enabled drivers build config 00:03:04.868 crypto/null: not in enabled drivers build config 00:03:04.868 crypto/octeontx: not in enabled drivers build config 00:03:04.868 crypto/openssl: not in enabled drivers build config 00:03:04.868 crypto/scheduler: not in enabled drivers build config 00:03:04.868 crypto/uadk: not in enabled drivers build config 00:03:04.868 crypto/virtio: not in enabled drivers build config 00:03:04.868 compress/isal: not in enabled drivers build config 00:03:04.868 compress/mlx5: not in enabled drivers build config 00:03:04.868 compress/nitrox: not in enabled drivers build config 00:03:04.868 compress/octeontx: not in enabled drivers build config 00:03:04.868 compress/zlib: not in enabled drivers build config 00:03:04.868 regex/*: missing internal dependency, "regexdev" 00:03:04.868 ml/*: missing internal dependency, "mldev" 00:03:04.868 vdpa/ifc: not in enabled drivers build config 00:03:04.868 vdpa/mlx5: not in enabled drivers build config 00:03:04.868 vdpa/nfp: not in enabled drivers build config 00:03:04.868 vdpa/sfc: not in enabled 
drivers build config 00:03:04.868 event/*: missing internal dependency, "eventdev" 00:03:04.868 baseband/*: missing internal dependency, "bbdev" 00:03:04.868 gpu/*: missing internal dependency, "gpudev" 00:03:04.868 00:03:04.868 00:03:04.868 Build targets in project: 85 00:03:04.868 00:03:04.868 DPDK 24.03.0 00:03:04.868 00:03:04.868 User defined options 00:03:04.868 buildtype : debug 00:03:04.868 default_library : shared 00:03:04.868 libdir : lib 00:03:04.868 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:04.868 b_sanitize : address 00:03:04.869 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:04.869 c_link_args : 00:03:04.869 cpu_instruction_set: native 00:03:04.869 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:04.869 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:04.869 enable_docs : false 00:03:04.869 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:04.869 enable_kmods : false 00:03:04.869 max_lcores : 128 00:03:04.869 tests : false 00:03:04.869 00:03:04.869 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:04.869 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:04.869 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:04.869 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:04.869 [3/268] Linking static target lib/librte_kvargs.a 00:03:04.869 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:04.869 [5/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:04.869 [6/268] Linking static target lib/librte_log.a 00:03:04.869 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:04.869 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.869 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:04.869 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:04.869 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:04.869 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:04.869 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:04.869 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:04.869 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:04.869 [16/268] Linking static target lib/librte_telemetry.a 00:03:04.869 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:04.869 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.869 [19/268] Linking target lib/librte_log.so.24.1 00:03:05.127 [20/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:05.127 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:05.127 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:05.385 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:05.385 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:05.385 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:05.385 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:05.385 [27/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:05.385 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:05.385 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.385 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:05.643 [31/268] Linking target lib/librte_telemetry.so.24.1 00:03:05.643 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:05.901 [33/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:06.160 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:06.160 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:06.160 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:06.419 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:06.419 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:06.419 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:06.419 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:06.419 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:06.419 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:06.419 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:06.678 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:06.678 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:06.937 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:06.937 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:06.937 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:07.503 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:07.503 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:07.503 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:07.762 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:07.762 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:07.762 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:07.762 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:07.762 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:08.020 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:08.020 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:08.020 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:08.278 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:08.278 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:08.278 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:08.537 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:08.537 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:08.537 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:08.537 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:08.537 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:08.796 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:08.796 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:08.796 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:08.796 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:09.054 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:09.054 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:09.312 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:09.312 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:09.312 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:09.312 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:09.312 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:09.570 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:09.570 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:09.829 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:09.829 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.829 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:10.088 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:10.088 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:10.088 [86/268] Linking static target lib/librte_eal.a 00:03:10.346 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:10.346 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:10.346 [89/268] Linking static target lib/librte_rcu.a 00:03:10.346 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:10.347 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:10.347 [92/268] Linking static target lib/librte_ring.a 00:03:10.347 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:10.347 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 
00:03:10.347 [95/268] Linking static target lib/librte_mempool.a 00:03:10.605 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:10.863 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.863 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:10.863 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:10.863 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:11.122 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.380 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:11.380 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:11.380 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:11.380 [105/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:11.380 [106/268] Linking static target lib/librte_meter.a 00:03:11.638 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:11.638 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:11.638 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:11.638 [110/268] Linking static target lib/librte_mbuf.a 00:03:11.638 [111/268] Linking static target lib/librte_net.a 00:03:11.638 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.897 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.154 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:12.154 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.154 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:12.154 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:12.412 [118/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:12.669 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:12.669 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.927 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:13.186 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:13.444 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:13.444 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:13.444 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:13.702 [126/268] Linking static target lib/librte_pci.a 00:03:13.702 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:13.702 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:13.960 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:13.960 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:13.960 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:13.960 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:13.960 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:13.960 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.960 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:14.218 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:14.218 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:14.218 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:14.218 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:14.218 [140/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:14.218 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:14.218 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:14.218 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:14.218 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:14.477 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:14.477 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:14.735 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:14.735 [148/268] Linking static target lib/librte_cmdline.a 00:03:14.735 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:14.735 [150/268] Linking static target lib/librte_ethdev.a 00:03:15.034 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:15.034 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:15.034 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:15.034 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:15.034 [155/268] Linking static target lib/librte_timer.a 00:03:15.318 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:15.576 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:15.576 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:15.834 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.834 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:15.834 [161/268] Linking static target lib/librte_compressdev.a 00:03:15.834 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 
00:03:15.834 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:16.097 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:16.097 [165/268] Linking static target lib/librte_hash.a 00:03:16.361 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:16.361 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:16.361 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.618 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:16.618 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:16.618 [171/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:16.618 [172/268] Linking static target lib/librte_dmadev.a 00:03:16.618 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:16.875 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.133 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:17.133 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:17.391 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:17.391 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.391 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:17.391 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:17.391 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:17.391 [182/268] Linking static target lib/librte_cryptodev.a 00:03:17.649 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:17.649 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:17.907 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:17.907 [186/268] Linking static target lib/librte_power.a 00:03:18.166 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:18.166 [188/268] Linking static target lib/librte_reorder.a 00:03:18.166 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:18.424 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:18.424 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:18.424 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:18.424 [193/268] Linking static target lib/librte_security.a 00:03:18.682 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.247 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:19.247 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.505 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.505 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:19.505 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:19.763 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:20.022 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:20.022 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:20.281 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.281 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:20.281 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:20.539 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:20.539 [207/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:20.798 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:20.798 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:20.798 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:20.798 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:21.057 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:21.057 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:21.057 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:21.057 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:21.057 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:21.057 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:21.057 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:21.057 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:21.057 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:21.057 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:21.364 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.364 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:21.364 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.364 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.365 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:21.622 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:22.557 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:22.557 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.815 [230/268] Linking target lib/librte_eal.so.24.1 00:03:22.815 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:22.815 [232/268] Linking target lib/librte_pci.so.24.1 00:03:23.072 [233/268] Linking target lib/librte_meter.so.24.1 00:03:23.072 [234/268] Linking target lib/librte_ring.so.24.1 00:03:23.072 [235/268] Linking target lib/librte_dmadev.so.24.1 00:03:23.072 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:23.072 [237/268] Linking target lib/librte_timer.so.24.1 00:03:23.072 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:23.072 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:23.072 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:23.072 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:23.072 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:23.072 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:23.072 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:23.072 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:23.330 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:23.330 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:23.330 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:23.330 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:23.330 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.589 [251/268] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:23.589 [252/268] Linking target lib/librte_net.so.24.1 00:03:23.589 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:23.589 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:23.589 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:23.589 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:23.589 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:23.847 [258/268] Linking target lib/librte_security.so.24.1 00:03:23.847 [259/268] Linking target lib/librte_hash.so.24.1 00:03:23.847 [260/268] Linking target lib/librte_cmdline.so.24.1 00:03:23.847 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:23.847 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:24.105 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:24.105 [264/268] Linking target lib/librte_power.so.24.1 00:03:26.634 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:26.634 [266/268] Linking static target lib/librte_vhost.a 00:03:28.534 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.534 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:28.534 INFO: autodetecting backend as ninja 00:03:28.534 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:50.471 CC lib/log/log.o 00:03:50.471 CC lib/log/log_flags.o 00:03:50.471 CC lib/ut_mock/mock.o 00:03:50.471 CC lib/log/log_deprecated.o 00:03:50.471 CC lib/ut/ut.o 00:03:50.471 LIB libspdk_log.a 00:03:50.471 LIB libspdk_ut_mock.a 00:03:50.471 SO libspdk_ut_mock.so.6.0 00:03:50.471 SO libspdk_log.so.7.1 00:03:50.471 LIB libspdk_ut.a 00:03:50.471 SO libspdk_ut.so.2.0 00:03:50.471 SYMLINK libspdk_ut_mock.so 00:03:50.471 SYMLINK libspdk_log.so 
00:03:50.471 SYMLINK libspdk_ut.so 00:03:50.472 CC lib/util/base64.o 00:03:50.472 CC lib/util/cpuset.o 00:03:50.472 CC lib/util/bit_array.o 00:03:50.472 CC lib/util/crc16.o 00:03:50.472 CC lib/util/crc32c.o 00:03:50.472 CC lib/util/crc32.o 00:03:50.472 CXX lib/trace_parser/trace.o 00:03:50.472 CC lib/ioat/ioat.o 00:03:50.472 CC lib/dma/dma.o 00:03:50.472 CC lib/vfio_user/host/vfio_user_pci.o 00:03:50.472 CC lib/vfio_user/host/vfio_user.o 00:03:50.472 CC lib/util/crc32_ieee.o 00:03:50.472 CC lib/util/crc64.o 00:03:50.472 CC lib/util/dif.o 00:03:50.472 CC lib/util/fd.o 00:03:50.472 CC lib/util/fd_group.o 00:03:50.472 LIB libspdk_dma.a 00:03:50.472 SO libspdk_dma.so.5.0 00:03:50.472 CC lib/util/file.o 00:03:50.472 CC lib/util/hexlify.o 00:03:50.472 SYMLINK libspdk_dma.so 00:03:50.472 CC lib/util/iov.o 00:03:50.472 CC lib/util/math.o 00:03:50.472 LIB libspdk_ioat.a 00:03:50.472 CC lib/util/net.o 00:03:50.472 SO libspdk_ioat.so.7.0 00:03:50.472 LIB libspdk_vfio_user.a 00:03:50.472 SYMLINK libspdk_ioat.so 00:03:50.472 CC lib/util/pipe.o 00:03:50.472 CC lib/util/strerror_tls.o 00:03:50.472 CC lib/util/string.o 00:03:50.472 SO libspdk_vfio_user.so.5.0 00:03:50.472 CC lib/util/uuid.o 00:03:50.472 CC lib/util/xor.o 00:03:50.472 CC lib/util/zipf.o 00:03:50.472 SYMLINK libspdk_vfio_user.so 00:03:50.472 CC lib/util/md5.o 00:03:50.472 LIB libspdk_util.a 00:03:50.472 LIB libspdk_trace_parser.a 00:03:50.472 SO libspdk_util.so.10.1 00:03:50.472 SO libspdk_trace_parser.so.6.0 00:03:50.472 SYMLINK libspdk_trace_parser.so 00:03:50.472 SYMLINK libspdk_util.so 00:03:50.472 CC lib/conf/conf.o 00:03:50.472 CC lib/vmd/vmd.o 00:03:50.472 CC lib/vmd/led.o 00:03:50.472 CC lib/idxd/idxd.o 00:03:50.472 CC lib/json/json_parse.o 00:03:50.472 CC lib/idxd/idxd_kernel.o 00:03:50.472 CC lib/idxd/idxd_user.o 00:03:50.472 CC lib/env_dpdk/env.o 00:03:50.472 CC lib/json/json_util.o 00:03:50.472 CC lib/rdma_utils/rdma_utils.o 00:03:50.472 CC lib/env_dpdk/memory.o 00:03:50.472 CC lib/env_dpdk/pci.o 
00:03:50.472 LIB libspdk_conf.a 00:03:50.472 CC lib/json/json_write.o 00:03:50.472 CC lib/env_dpdk/init.o 00:03:50.472 SO libspdk_conf.so.6.0 00:03:50.472 SYMLINK libspdk_conf.so 00:03:50.472 CC lib/env_dpdk/threads.o 00:03:50.472 CC lib/env_dpdk/pci_ioat.o 00:03:50.472 LIB libspdk_rdma_utils.a 00:03:50.472 SO libspdk_rdma_utils.so.1.0 00:03:50.472 SYMLINK libspdk_rdma_utils.so 00:03:50.472 CC lib/env_dpdk/pci_virtio.o 00:03:50.472 CC lib/env_dpdk/pci_vmd.o 00:03:50.472 LIB libspdk_json.a 00:03:50.472 CC lib/env_dpdk/pci_idxd.o 00:03:50.472 CC lib/env_dpdk/pci_event.o 00:03:50.472 SO libspdk_json.so.6.0 00:03:50.472 CC lib/env_dpdk/sigbus_handler.o 00:03:50.472 CC lib/rdma_provider/common.o 00:03:50.730 SYMLINK libspdk_json.so 00:03:50.730 CC lib/env_dpdk/pci_dpdk.o 00:03:50.730 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:50.730 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:50.730 LIB libspdk_vmd.a 00:03:50.730 LIB libspdk_idxd.a 00:03:50.730 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:50.730 SO libspdk_vmd.so.6.0 00:03:50.730 SO libspdk_idxd.so.12.1 00:03:50.730 SYMLINK libspdk_vmd.so 00:03:50.730 SYMLINK libspdk_idxd.so 00:03:50.989 CC lib/jsonrpc/jsonrpc_server.o 00:03:50.989 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:50.989 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:50.989 CC lib/jsonrpc/jsonrpc_client.o 00:03:50.989 LIB libspdk_rdma_provider.a 00:03:50.989 SO libspdk_rdma_provider.so.7.0 00:03:50.989 SYMLINK libspdk_rdma_provider.so 00:03:51.249 LIB libspdk_jsonrpc.a 00:03:51.249 SO libspdk_jsonrpc.so.6.0 00:03:51.508 SYMLINK libspdk_jsonrpc.so 00:03:51.767 CC lib/rpc/rpc.o 00:03:51.767 LIB libspdk_env_dpdk.a 00:03:52.026 SO libspdk_env_dpdk.so.15.1 00:03:52.026 LIB libspdk_rpc.a 00:03:52.026 SO libspdk_rpc.so.6.0 00:03:52.026 SYMLINK libspdk_rpc.so 00:03:52.026 SYMLINK libspdk_env_dpdk.so 00:03:52.285 CC lib/trace/trace.o 00:03:52.285 CC lib/trace/trace_flags.o 00:03:52.285 CC lib/notify/notify_rpc.o 00:03:52.285 CC lib/trace/trace_rpc.o 00:03:52.285 CC 
lib/notify/notify.o 00:03:52.285 CC lib/keyring/keyring.o 00:03:52.285 CC lib/keyring/keyring_rpc.o 00:03:52.544 LIB libspdk_notify.a 00:03:52.544 SO libspdk_notify.so.6.0 00:03:52.544 SYMLINK libspdk_notify.so 00:03:52.544 LIB libspdk_keyring.a 00:03:52.544 LIB libspdk_trace.a 00:03:52.544 SO libspdk_keyring.so.2.0 00:03:52.802 SO libspdk_trace.so.11.0 00:03:52.802 SYMLINK libspdk_keyring.so 00:03:52.802 SYMLINK libspdk_trace.so 00:03:53.062 CC lib/sock/sock.o 00:03:53.062 CC lib/sock/sock_rpc.o 00:03:53.062 CC lib/thread/thread.o 00:03:53.062 CC lib/thread/iobuf.o 00:03:53.630 LIB libspdk_sock.a 00:03:53.630 SO libspdk_sock.so.10.0 00:03:53.630 SYMLINK libspdk_sock.so 00:03:53.888 CC lib/nvme/nvme_ctrlr.o 00:03:53.888 CC lib/nvme/nvme_fabric.o 00:03:53.888 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:53.888 CC lib/nvme/nvme_ns_cmd.o 00:03:53.888 CC lib/nvme/nvme_pcie.o 00:03:53.888 CC lib/nvme/nvme_pcie_common.o 00:03:53.888 CC lib/nvme/nvme_ns.o 00:03:53.888 CC lib/nvme/nvme_qpair.o 00:03:53.888 CC lib/nvme/nvme.o 00:03:54.822 CC lib/nvme/nvme_quirks.o 00:03:54.822 CC lib/nvme/nvme_transport.o 00:03:55.080 CC lib/nvme/nvme_discovery.o 00:03:55.080 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:55.080 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:55.080 CC lib/nvme/nvme_tcp.o 00:03:55.080 LIB libspdk_thread.a 00:03:55.339 SO libspdk_thread.so.11.0 00:03:55.339 CC lib/nvme/nvme_opal.o 00:03:55.339 SYMLINK libspdk_thread.so 00:03:55.339 CC lib/nvme/nvme_io_msg.o 00:03:55.339 CC lib/nvme/nvme_poll_group.o 00:03:55.598 CC lib/nvme/nvme_zns.o 00:03:55.856 CC lib/accel/accel.o 00:03:55.856 CC lib/nvme/nvme_stubs.o 00:03:55.856 CC lib/blob/blobstore.o 00:03:55.856 CC lib/init/json_config.o 00:03:55.856 CC lib/init/subsystem.o 00:03:56.114 CC lib/blob/request.o 00:03:56.114 CC lib/blob/zeroes.o 00:03:56.114 CC lib/blob/blob_bs_dev.o 00:03:56.114 CC lib/init/subsystem_rpc.o 00:03:56.372 CC lib/init/rpc.o 00:03:56.372 CC lib/nvme/nvme_auth.o 00:03:56.372 CC lib/nvme/nvme_cuse.o 00:03:56.372 CC 
lib/accel/accel_rpc.o 00:03:56.372 CC lib/accel/accel_sw.o 00:03:56.372 LIB libspdk_init.a 00:03:56.630 SO libspdk_init.so.6.0 00:03:56.630 CC lib/virtio/virtio.o 00:03:56.630 CC lib/virtio/virtio_vhost_user.o 00:03:56.630 SYMLINK libspdk_init.so 00:03:56.630 CC lib/virtio/virtio_vfio_user.o 00:03:56.888 CC lib/virtio/virtio_pci.o 00:03:56.888 CC lib/nvme/nvme_rdma.o 00:03:57.147 CC lib/fsdev/fsdev.o 00:03:57.147 CC lib/fsdev/fsdev_io.o 00:03:57.147 CC lib/fsdev/fsdev_rpc.o 00:03:57.147 CC lib/event/app.o 00:03:57.147 LIB libspdk_virtio.a 00:03:57.147 LIB libspdk_accel.a 00:03:57.147 CC lib/event/reactor.o 00:03:57.147 SO libspdk_virtio.so.7.0 00:03:57.147 SO libspdk_accel.so.16.0 00:03:57.404 SYMLINK libspdk_virtio.so 00:03:57.404 CC lib/event/log_rpc.o 00:03:57.404 SYMLINK libspdk_accel.so 00:03:57.404 CC lib/event/app_rpc.o 00:03:57.404 CC lib/event/scheduler_static.o 00:03:57.662 CC lib/bdev/bdev.o 00:03:57.662 CC lib/bdev/bdev_rpc.o 00:03:57.662 CC lib/bdev/bdev_zone.o 00:03:57.662 CC lib/bdev/scsi_nvme.o 00:03:57.662 CC lib/bdev/part.o 00:03:57.662 LIB libspdk_event.a 00:03:57.920 SO libspdk_event.so.14.0 00:03:57.920 LIB libspdk_fsdev.a 00:03:57.920 SO libspdk_fsdev.so.2.0 00:03:57.920 SYMLINK libspdk_event.so 00:03:57.920 SYMLINK libspdk_fsdev.so 00:03:58.209 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:58.783 LIB libspdk_nvme.a 00:03:59.042 SO libspdk_nvme.so.15.0 00:03:59.042 LIB libspdk_fuse_dispatcher.a 00:03:59.042 SO libspdk_fuse_dispatcher.so.1.0 00:03:59.300 SYMLINK libspdk_fuse_dispatcher.so 00:03:59.300 SYMLINK libspdk_nvme.so 00:04:00.235 LIB libspdk_blob.a 00:04:00.493 SO libspdk_blob.so.11.0 00:04:00.493 SYMLINK libspdk_blob.so 00:04:00.751 CC lib/lvol/lvol.o 00:04:00.751 CC lib/blobfs/blobfs.o 00:04:00.751 CC lib/blobfs/tree.o 00:04:01.340 LIB libspdk_bdev.a 00:04:01.599 SO libspdk_bdev.so.17.0 00:04:01.599 SYMLINK libspdk_bdev.so 00:04:01.859 CC lib/ublk/ublk.o 00:04:01.859 CC lib/ublk/ublk_rpc.o 00:04:01.859 CC lib/scsi/dev.o 00:04:01.859 
CC lib/scsi/lun.o 00:04:01.859 CC lib/scsi/port.o 00:04:01.859 CC lib/nvmf/ctrlr.o 00:04:01.859 CC lib/nbd/nbd.o 00:04:01.859 CC lib/ftl/ftl_core.o 00:04:01.859 LIB libspdk_blobfs.a 00:04:02.117 SO libspdk_blobfs.so.10.0 00:04:02.117 CC lib/scsi/scsi.o 00:04:02.117 SYMLINK libspdk_blobfs.so 00:04:02.117 CC lib/nvmf/ctrlr_discovery.o 00:04:02.117 CC lib/nvmf/ctrlr_bdev.o 00:04:02.117 LIB libspdk_lvol.a 00:04:02.117 CC lib/scsi/scsi_bdev.o 00:04:02.117 SO libspdk_lvol.so.10.0 00:04:02.375 CC lib/scsi/scsi_pr.o 00:04:02.375 SYMLINK libspdk_lvol.so 00:04:02.375 CC lib/scsi/scsi_rpc.o 00:04:02.375 CC lib/ftl/ftl_init.o 00:04:02.375 CC lib/nbd/nbd_rpc.o 00:04:02.375 CC lib/ftl/ftl_layout.o 00:04:02.375 CC lib/nvmf/subsystem.o 00:04:02.634 CC lib/ftl/ftl_debug.o 00:04:02.634 LIB libspdk_nbd.a 00:04:02.634 SO libspdk_nbd.so.7.0 00:04:02.634 CC lib/ftl/ftl_io.o 00:04:02.634 SYMLINK libspdk_nbd.so 00:04:02.634 CC lib/scsi/task.o 00:04:02.634 CC lib/nvmf/nvmf.o 00:04:02.634 LIB libspdk_ublk.a 00:04:02.893 SO libspdk_ublk.so.3.0 00:04:02.893 CC lib/ftl/ftl_sb.o 00:04:02.893 CC lib/nvmf/nvmf_rpc.o 00:04:02.893 SYMLINK libspdk_ublk.so 00:04:02.893 CC lib/nvmf/transport.o 00:04:02.893 CC lib/ftl/ftl_l2p.o 00:04:02.893 LIB libspdk_scsi.a 00:04:02.893 CC lib/nvmf/tcp.o 00:04:02.893 CC lib/nvmf/stubs.o 00:04:03.151 CC lib/nvmf/mdns_server.o 00:04:03.151 SO libspdk_scsi.so.9.0 00:04:03.151 CC lib/ftl/ftl_l2p_flat.o 00:04:03.151 SYMLINK libspdk_scsi.so 00:04:03.151 CC lib/ftl/ftl_nv_cache.o 00:04:03.409 CC lib/ftl/ftl_band.o 00:04:03.409 CC lib/nvmf/rdma.o 00:04:03.667 CC lib/nvmf/auth.o 00:04:03.925 CC lib/ftl/ftl_band_ops.o 00:04:03.926 CC lib/ftl/ftl_writer.o 00:04:03.926 CC lib/ftl/ftl_rq.o 00:04:03.926 CC lib/iscsi/conn.o 00:04:04.184 CC lib/iscsi/init_grp.o 00:04:04.184 CC lib/iscsi/iscsi.o 00:04:04.184 CC lib/iscsi/param.o 00:04:04.184 CC lib/iscsi/portal_grp.o 00:04:04.184 CC lib/iscsi/tgt_node.o 00:04:04.441 CC lib/iscsi/iscsi_subsystem.o 00:04:04.441 CC lib/ftl/ftl_reloc.o 
00:04:04.441 CC lib/iscsi/iscsi_rpc.o 00:04:04.699 CC lib/iscsi/task.o 00:04:04.699 CC lib/ftl/ftl_l2p_cache.o 00:04:04.699 CC lib/ftl/ftl_p2l.o 00:04:04.956 CC lib/vhost/vhost.o 00:04:04.956 CC lib/ftl/ftl_p2l_log.o 00:04:04.956 CC lib/ftl/mngt/ftl_mngt.o 00:04:04.956 CC lib/vhost/vhost_rpc.o 00:04:04.956 CC lib/vhost/vhost_scsi.o 00:04:04.956 CC lib/vhost/vhost_blk.o 00:04:05.214 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:05.214 CC lib/vhost/rte_vhost_user.o 00:04:05.472 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:05.472 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:05.472 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:05.472 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:05.729 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:05.729 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:05.729 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:05.986 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:05.986 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:05.986 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:05.986 LIB libspdk_iscsi.a 00:04:05.986 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:05.986 SO libspdk_iscsi.so.8.0 00:04:06.244 CC lib/ftl/utils/ftl_conf.o 00:04:06.244 CC lib/ftl/utils/ftl_md.o 00:04:06.244 CC lib/ftl/utils/ftl_mempool.o 00:04:06.244 CC lib/ftl/utils/ftl_bitmap.o 00:04:06.244 CC lib/ftl/utils/ftl_property.o 00:04:06.244 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:06.244 SYMLINK libspdk_iscsi.so 00:04:06.244 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:06.244 LIB libspdk_nvmf.a 00:04:06.501 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:06.501 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:06.501 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:06.501 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:06.501 SO libspdk_nvmf.so.20.0 00:04:06.501 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:06.501 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:06.501 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:06.759 LIB libspdk_vhost.a 00:04:06.759 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:06.759 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:06.759 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:06.759 SO 
libspdk_vhost.so.8.0 00:04:06.759 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:06.759 CC lib/ftl/base/ftl_base_dev.o 00:04:06.759 CC lib/ftl/base/ftl_base_bdev.o 00:04:06.759 CC lib/ftl/ftl_trace.o 00:04:06.759 SYMLINK libspdk_vhost.so 00:04:06.759 SYMLINK libspdk_nvmf.so 00:04:07.016 LIB libspdk_ftl.a 00:04:07.274 SO libspdk_ftl.so.9.0 00:04:07.531 SYMLINK libspdk_ftl.so 00:04:08.096 CC module/env_dpdk/env_dpdk_rpc.o 00:04:08.096 CC module/accel/ioat/accel_ioat.o 00:04:08.096 CC module/accel/dsa/accel_dsa.o 00:04:08.096 CC module/fsdev/aio/fsdev_aio.o 00:04:08.096 CC module/accel/error/accel_error.o 00:04:08.096 CC module/sock/posix/posix.o 00:04:08.096 CC module/keyring/file/keyring.o 00:04:08.096 CC module/blob/bdev/blob_bdev.o 00:04:08.096 CC module/accel/iaa/accel_iaa.o 00:04:08.096 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:08.096 LIB libspdk_env_dpdk_rpc.a 00:04:08.096 SO libspdk_env_dpdk_rpc.so.6.0 00:04:08.354 SYMLINK libspdk_env_dpdk_rpc.so 00:04:08.354 CC module/accel/iaa/accel_iaa_rpc.o 00:04:08.354 CC module/keyring/file/keyring_rpc.o 00:04:08.354 CC module/accel/ioat/accel_ioat_rpc.o 00:04:08.354 CC module/accel/error/accel_error_rpc.o 00:04:08.354 LIB libspdk_scheduler_dynamic.a 00:04:08.354 SO libspdk_scheduler_dynamic.so.4.0 00:04:08.354 LIB libspdk_accel_iaa.a 00:04:08.354 LIB libspdk_keyring_file.a 00:04:08.354 SO libspdk_accel_iaa.so.3.0 00:04:08.354 LIB libspdk_blob_bdev.a 00:04:08.354 SYMLINK libspdk_scheduler_dynamic.so 00:04:08.354 CC module/accel/dsa/accel_dsa_rpc.o 00:04:08.354 SO libspdk_keyring_file.so.2.0 00:04:08.611 SO libspdk_blob_bdev.so.11.0 00:04:08.611 SYMLINK libspdk_accel_iaa.so 00:04:08.611 LIB libspdk_accel_ioat.a 00:04:08.611 LIB libspdk_accel_error.a 00:04:08.611 SO libspdk_accel_ioat.so.6.0 00:04:08.611 SYMLINK libspdk_keyring_file.so 00:04:08.611 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:08.611 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:08.611 SYMLINK libspdk_blob_bdev.so 00:04:08.611 SO 
libspdk_accel_error.so.2.0 00:04:08.611 LIB libspdk_accel_dsa.a 00:04:08.611 SYMLINK libspdk_accel_ioat.so 00:04:08.611 SO libspdk_accel_dsa.so.5.0 00:04:08.611 CC module/fsdev/aio/linux_aio_mgr.o 00:04:08.611 SYMLINK libspdk_accel_error.so 00:04:08.611 CC module/keyring/linux/keyring.o 00:04:08.611 CC module/scheduler/gscheduler/gscheduler.o 00:04:08.611 SYMLINK libspdk_accel_dsa.so 00:04:08.868 LIB libspdk_scheduler_dpdk_governor.a 00:04:08.868 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:08.868 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:08.868 CC module/keyring/linux/keyring_rpc.o 00:04:08.868 LIB libspdk_scheduler_gscheduler.a 00:04:08.868 CC module/bdev/delay/vbdev_delay.o 00:04:08.868 SO libspdk_scheduler_gscheduler.so.4.0 00:04:08.868 CC module/bdev/gpt/gpt.o 00:04:08.868 CC module/bdev/error/vbdev_error.o 00:04:08.868 CC module/blobfs/bdev/blobfs_bdev.o 00:04:08.868 LIB libspdk_fsdev_aio.a 00:04:08.868 SYMLINK libspdk_scheduler_gscheduler.so 00:04:08.868 CC module/bdev/gpt/vbdev_gpt.o 00:04:09.126 SO libspdk_fsdev_aio.so.1.0 00:04:09.126 LIB libspdk_keyring_linux.a 00:04:09.126 CC module/bdev/lvol/vbdev_lvol.o 00:04:09.126 LIB libspdk_sock_posix.a 00:04:09.126 SO libspdk_keyring_linux.so.1.0 00:04:09.126 CC module/bdev/malloc/bdev_malloc.o 00:04:09.126 SO libspdk_sock_posix.so.6.0 00:04:09.126 SYMLINK libspdk_fsdev_aio.so 00:04:09.126 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:09.126 SYMLINK libspdk_keyring_linux.so 00:04:09.126 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:09.126 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:09.126 SYMLINK libspdk_sock_posix.so 00:04:09.126 CC module/bdev/error/vbdev_error_rpc.o 00:04:09.384 LIB libspdk_bdev_gpt.a 00:04:09.384 CC module/bdev/null/bdev_null.o 00:04:09.384 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:09.384 SO libspdk_bdev_gpt.so.6.0 00:04:09.384 LIB libspdk_blobfs_bdev.a 00:04:09.384 LIB libspdk_bdev_error.a 00:04:09.384 SO libspdk_blobfs_bdev.so.6.0 00:04:09.384 CC module/bdev/null/bdev_null_rpc.o 
00:04:09.384 CC module/bdev/nvme/bdev_nvme.o 00:04:09.384 SO libspdk_bdev_error.so.6.0 00:04:09.384 SYMLINK libspdk_bdev_gpt.so 00:04:09.384 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:09.384 SYMLINK libspdk_blobfs_bdev.so 00:04:09.384 CC module/bdev/nvme/nvme_rpc.o 00:04:09.641 SYMLINK libspdk_bdev_error.so 00:04:09.641 LIB libspdk_bdev_delay.a 00:04:09.641 LIB libspdk_bdev_malloc.a 00:04:09.641 SO libspdk_bdev_delay.so.6.0 00:04:09.641 SO libspdk_bdev_malloc.so.6.0 00:04:09.641 CC module/bdev/nvme/bdev_mdns_client.o 00:04:09.641 SYMLINK libspdk_bdev_malloc.so 00:04:09.641 LIB libspdk_bdev_lvol.a 00:04:09.641 SYMLINK libspdk_bdev_delay.so 00:04:09.641 LIB libspdk_bdev_null.a 00:04:09.641 CC module/bdev/nvme/vbdev_opal.o 00:04:09.641 CC module/bdev/passthru/vbdev_passthru.o 00:04:09.641 SO libspdk_bdev_lvol.so.6.0 00:04:09.641 SO libspdk_bdev_null.so.6.0 00:04:09.641 CC module/bdev/raid/bdev_raid.o 00:04:09.898 CC module/bdev/raid/bdev_raid_rpc.o 00:04:09.898 SYMLINK libspdk_bdev_null.so 00:04:09.898 SYMLINK libspdk_bdev_lvol.so 00:04:09.898 CC module/bdev/raid/bdev_raid_sb.o 00:04:09.898 CC module/bdev/raid/raid0.o 00:04:09.898 CC module/bdev/raid/raid1.o 00:04:09.898 CC module/bdev/split/vbdev_split.o 00:04:09.898 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:10.156 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:10.156 CC module/bdev/raid/concat.o 00:04:10.156 CC module/bdev/split/vbdev_split_rpc.o 00:04:10.156 CC module/bdev/raid/raid5f.o 00:04:10.156 LIB libspdk_bdev_passthru.a 00:04:10.156 SO libspdk_bdev_passthru.so.6.0 00:04:10.414 SYMLINK libspdk_bdev_passthru.so 00:04:10.414 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:10.414 LIB libspdk_bdev_split.a 00:04:10.414 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:10.414 SO libspdk_bdev_split.so.6.0 00:04:10.414 CC module/bdev/aio/bdev_aio.o 00:04:10.414 SYMLINK libspdk_bdev_split.so 00:04:10.414 CC module/bdev/aio/bdev_aio_rpc.o 00:04:10.414 CC module/bdev/ftl/bdev_ftl.o 00:04:10.414 CC 
module/bdev/iscsi/bdev_iscsi.o 00:04:10.414 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:10.671 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:10.671 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:10.672 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:10.672 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:10.929 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:10.929 LIB libspdk_bdev_aio.a 00:04:10.929 SO libspdk_bdev_aio.so.6.0 00:04:10.929 SYMLINK libspdk_bdev_aio.so 00:04:10.929 LIB libspdk_bdev_zone_block.a 00:04:10.929 LIB libspdk_bdev_ftl.a 00:04:10.929 LIB libspdk_bdev_iscsi.a 00:04:10.929 SO libspdk_bdev_zone_block.so.6.0 00:04:10.929 SO libspdk_bdev_ftl.so.6.0 00:04:10.929 SO libspdk_bdev_iscsi.so.6.0 00:04:10.929 SYMLINK libspdk_bdev_zone_block.so 00:04:11.187 SYMLINK libspdk_bdev_iscsi.so 00:04:11.187 SYMLINK libspdk_bdev_ftl.so 00:04:11.187 LIB libspdk_bdev_raid.a 00:04:11.187 SO libspdk_bdev_raid.so.6.0 00:04:11.187 LIB libspdk_bdev_virtio.a 00:04:11.187 SO libspdk_bdev_virtio.so.6.0 00:04:11.187 SYMLINK libspdk_bdev_raid.so 00:04:11.444 SYMLINK libspdk_bdev_virtio.so 00:04:12.817 LIB libspdk_bdev_nvme.a 00:04:12.817 SO libspdk_bdev_nvme.so.7.1 00:04:12.817 SYMLINK libspdk_bdev_nvme.so 00:04:13.384 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:13.384 CC module/event/subsystems/sock/sock.o 00:04:13.384 CC module/event/subsystems/iobuf/iobuf.o 00:04:13.384 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:13.384 CC module/event/subsystems/scheduler/scheduler.o 00:04:13.384 CC module/event/subsystems/fsdev/fsdev.o 00:04:13.384 CC module/event/subsystems/vmd/vmd.o 00:04:13.384 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:13.384 CC module/event/subsystems/keyring/keyring.o 00:04:13.644 LIB libspdk_event_sock.a 00:04:13.644 LIB libspdk_event_fsdev.a 00:04:13.644 LIB libspdk_event_keyring.a 00:04:13.644 LIB libspdk_event_vhost_blk.a 00:04:13.644 LIB libspdk_event_scheduler.a 00:04:13.644 LIB libspdk_event_vmd.a 00:04:13.644 SO libspdk_event_sock.so.5.0 
00:04:13.644 SO libspdk_event_fsdev.so.1.0 00:04:13.644 LIB libspdk_event_iobuf.a 00:04:13.644 SO libspdk_event_vhost_blk.so.3.0 00:04:13.644 SO libspdk_event_keyring.so.1.0 00:04:13.644 SO libspdk_event_scheduler.so.4.0 00:04:13.644 SO libspdk_event_vmd.so.6.0 00:04:13.644 SO libspdk_event_iobuf.so.3.0 00:04:13.644 SYMLINK libspdk_event_fsdev.so 00:04:13.644 SYMLINK libspdk_event_sock.so 00:04:13.644 SYMLINK libspdk_event_vhost_blk.so 00:04:13.644 SYMLINK libspdk_event_scheduler.so 00:04:13.644 SYMLINK libspdk_event_keyring.so 00:04:13.644 SYMLINK libspdk_event_vmd.so 00:04:13.644 SYMLINK libspdk_event_iobuf.so 00:04:13.902 CC module/event/subsystems/accel/accel.o 00:04:14.160 LIB libspdk_event_accel.a 00:04:14.160 SO libspdk_event_accel.so.6.0 00:04:14.160 SYMLINK libspdk_event_accel.so 00:04:14.419 CC module/event/subsystems/bdev/bdev.o 00:04:14.677 LIB libspdk_event_bdev.a 00:04:14.677 SO libspdk_event_bdev.so.6.0 00:04:14.935 SYMLINK libspdk_event_bdev.so 00:04:15.193 CC module/event/subsystems/ublk/ublk.o 00:04:15.193 CC module/event/subsystems/scsi/scsi.o 00:04:15.193 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:15.193 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:15.193 CC module/event/subsystems/nbd/nbd.o 00:04:15.193 LIB libspdk_event_ublk.a 00:04:15.193 LIB libspdk_event_nbd.a 00:04:15.193 LIB libspdk_event_scsi.a 00:04:15.193 SO libspdk_event_nbd.so.6.0 00:04:15.193 SO libspdk_event_ublk.so.3.0 00:04:15.453 SO libspdk_event_scsi.so.6.0 00:04:15.453 SYMLINK libspdk_event_nbd.so 00:04:15.453 SYMLINK libspdk_event_ublk.so 00:04:15.453 LIB libspdk_event_nvmf.a 00:04:15.453 SYMLINK libspdk_event_scsi.so 00:04:15.453 SO libspdk_event_nvmf.so.6.0 00:04:15.453 SYMLINK libspdk_event_nvmf.so 00:04:15.711 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:15.711 CC module/event/subsystems/iscsi/iscsi.o 00:04:15.970 LIB libspdk_event_vhost_scsi.a 00:04:15.970 LIB libspdk_event_iscsi.a 00:04:15.970 SO libspdk_event_vhost_scsi.so.3.0 00:04:15.970 SO 
libspdk_event_iscsi.so.6.0 00:04:15.970 SYMLINK libspdk_event_vhost_scsi.so 00:04:15.970 SYMLINK libspdk_event_iscsi.so 00:04:16.227 SO libspdk.so.6.0 00:04:16.227 SYMLINK libspdk.so 00:04:16.486 CC test/rpc_client/rpc_client_test.o 00:04:16.486 CXX app/trace/trace.o 00:04:16.486 TEST_HEADER include/spdk/accel.h 00:04:16.486 TEST_HEADER include/spdk/accel_module.h 00:04:16.486 TEST_HEADER include/spdk/assert.h 00:04:16.486 TEST_HEADER include/spdk/barrier.h 00:04:16.486 TEST_HEADER include/spdk/base64.h 00:04:16.486 TEST_HEADER include/spdk/bdev.h 00:04:16.486 TEST_HEADER include/spdk/bdev_module.h 00:04:16.486 TEST_HEADER include/spdk/bdev_zone.h 00:04:16.486 TEST_HEADER include/spdk/bit_array.h 00:04:16.486 TEST_HEADER include/spdk/bit_pool.h 00:04:16.486 TEST_HEADER include/spdk/blob_bdev.h 00:04:16.486 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:16.486 TEST_HEADER include/spdk/blobfs.h 00:04:16.486 TEST_HEADER include/spdk/blob.h 00:04:16.486 TEST_HEADER include/spdk/conf.h 00:04:16.486 TEST_HEADER include/spdk/config.h 00:04:16.486 TEST_HEADER include/spdk/cpuset.h 00:04:16.486 TEST_HEADER include/spdk/crc16.h 00:04:16.486 TEST_HEADER include/spdk/crc32.h 00:04:16.486 TEST_HEADER include/spdk/crc64.h 00:04:16.486 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:16.486 TEST_HEADER include/spdk/dif.h 00:04:16.486 TEST_HEADER include/spdk/dma.h 00:04:16.486 TEST_HEADER include/spdk/endian.h 00:04:16.486 TEST_HEADER include/spdk/env_dpdk.h 00:04:16.486 TEST_HEADER include/spdk/env.h 00:04:16.486 TEST_HEADER include/spdk/event.h 00:04:16.486 TEST_HEADER include/spdk/fd_group.h 00:04:16.486 TEST_HEADER include/spdk/fd.h 00:04:16.486 TEST_HEADER include/spdk/file.h 00:04:16.486 TEST_HEADER include/spdk/fsdev.h 00:04:16.486 TEST_HEADER include/spdk/fsdev_module.h 00:04:16.486 TEST_HEADER include/spdk/ftl.h 00:04:16.486 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:16.486 TEST_HEADER include/spdk/gpt_spec.h 00:04:16.486 CC examples/util/zipf/zipf.o 00:04:16.486 
TEST_HEADER include/spdk/hexlify.h 00:04:16.486 TEST_HEADER include/spdk/histogram_data.h 00:04:16.486 CC examples/ioat/perf/perf.o 00:04:16.486 TEST_HEADER include/spdk/idxd.h 00:04:16.486 TEST_HEADER include/spdk/idxd_spec.h 00:04:16.486 CC test/thread/poller_perf/poller_perf.o 00:04:16.486 TEST_HEADER include/spdk/init.h 00:04:16.486 TEST_HEADER include/spdk/ioat.h 00:04:16.486 TEST_HEADER include/spdk/ioat_spec.h 00:04:16.486 TEST_HEADER include/spdk/iscsi_spec.h 00:04:16.486 TEST_HEADER include/spdk/json.h 00:04:16.486 TEST_HEADER include/spdk/jsonrpc.h 00:04:16.486 TEST_HEADER include/spdk/keyring.h 00:04:16.486 TEST_HEADER include/spdk/keyring_module.h 00:04:16.486 TEST_HEADER include/spdk/likely.h 00:04:16.486 TEST_HEADER include/spdk/log.h 00:04:16.486 TEST_HEADER include/spdk/lvol.h 00:04:16.486 TEST_HEADER include/spdk/md5.h 00:04:16.486 TEST_HEADER include/spdk/memory.h 00:04:16.486 TEST_HEADER include/spdk/mmio.h 00:04:16.486 TEST_HEADER include/spdk/nbd.h 00:04:16.486 TEST_HEADER include/spdk/net.h 00:04:16.486 TEST_HEADER include/spdk/notify.h 00:04:16.486 TEST_HEADER include/spdk/nvme.h 00:04:16.486 TEST_HEADER include/spdk/nvme_intel.h 00:04:16.486 CC test/dma/test_dma/test_dma.o 00:04:16.486 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:16.486 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:16.486 CC test/app/bdev_svc/bdev_svc.o 00:04:16.486 TEST_HEADER include/spdk/nvme_spec.h 00:04:16.486 TEST_HEADER include/spdk/nvme_zns.h 00:04:16.486 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:16.486 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:16.486 TEST_HEADER include/spdk/nvmf.h 00:04:16.486 TEST_HEADER include/spdk/nvmf_spec.h 00:04:16.486 TEST_HEADER include/spdk/nvmf_transport.h 00:04:16.486 TEST_HEADER include/spdk/opal.h 00:04:16.486 CC test/env/mem_callbacks/mem_callbacks.o 00:04:16.486 TEST_HEADER include/spdk/opal_spec.h 00:04:16.486 TEST_HEADER include/spdk/pci_ids.h 00:04:16.486 TEST_HEADER include/spdk/pipe.h 00:04:16.486 TEST_HEADER 
include/spdk/queue.h 00:04:16.486 TEST_HEADER include/spdk/reduce.h 00:04:16.486 TEST_HEADER include/spdk/rpc.h 00:04:16.486 TEST_HEADER include/spdk/scheduler.h 00:04:16.486 TEST_HEADER include/spdk/scsi.h 00:04:16.486 TEST_HEADER include/spdk/scsi_spec.h 00:04:16.486 TEST_HEADER include/spdk/sock.h 00:04:16.486 TEST_HEADER include/spdk/stdinc.h 00:04:16.744 TEST_HEADER include/spdk/string.h 00:04:16.744 TEST_HEADER include/spdk/thread.h 00:04:16.744 TEST_HEADER include/spdk/trace.h 00:04:16.744 TEST_HEADER include/spdk/trace_parser.h 00:04:16.744 TEST_HEADER include/spdk/tree.h 00:04:16.744 TEST_HEADER include/spdk/ublk.h 00:04:16.744 TEST_HEADER include/spdk/util.h 00:04:16.744 TEST_HEADER include/spdk/uuid.h 00:04:16.744 TEST_HEADER include/spdk/version.h 00:04:16.744 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:16.744 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:16.744 TEST_HEADER include/spdk/vhost.h 00:04:16.744 TEST_HEADER include/spdk/vmd.h 00:04:16.744 TEST_HEADER include/spdk/xor.h 00:04:16.744 TEST_HEADER include/spdk/zipf.h 00:04:16.744 CXX test/cpp_headers/accel.o 00:04:16.744 LINK rpc_client_test 00:04:16.744 LINK interrupt_tgt 00:04:16.744 LINK poller_perf 00:04:16.744 LINK zipf 00:04:16.744 LINK bdev_svc 00:04:16.744 LINK ioat_perf 00:04:16.744 CXX test/cpp_headers/accel_module.o 00:04:16.744 CXX test/cpp_headers/assert.o 00:04:17.002 LINK spdk_trace 00:04:17.002 CC test/app/histogram_perf/histogram_perf.o 00:04:17.002 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:17.002 CXX test/cpp_headers/barrier.o 00:04:17.002 CC test/event/event_perf/event_perf.o 00:04:17.002 CC examples/ioat/verify/verify.o 00:04:17.002 CC test/event/reactor/reactor.o 00:04:17.259 CC test/event/reactor_perf/reactor_perf.o 00:04:17.259 CC app/trace_record/trace_record.o 00:04:17.260 LINK test_dma 00:04:17.260 LINK histogram_perf 00:04:17.260 LINK event_perf 00:04:17.260 LINK reactor 00:04:17.260 CXX test/cpp_headers/base64.o 00:04:17.260 LINK mem_callbacks 00:04:17.260 
LINK reactor_perf 00:04:17.260 LINK verify 00:04:17.517 CXX test/cpp_headers/bdev.o 00:04:17.517 CC app/nvmf_tgt/nvmf_main.o 00:04:17.517 LINK spdk_trace_record 00:04:17.517 CC test/env/vtophys/vtophys.o 00:04:17.517 LINK nvme_fuzz 00:04:17.517 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:17.517 CC test/event/app_repeat/app_repeat.o 00:04:17.517 CC examples/sock/hello_world/hello_sock.o 00:04:17.517 CC examples/vmd/lsvmd/lsvmd.o 00:04:17.517 CC examples/thread/thread/thread_ex.o 00:04:17.517 CXX test/cpp_headers/bdev_module.o 00:04:17.775 LINK nvmf_tgt 00:04:17.775 LINK vtophys 00:04:17.775 CC test/env/memory/memory_ut.o 00:04:17.775 LINK lsvmd 00:04:17.775 LINK env_dpdk_post_init 00:04:17.775 LINK app_repeat 00:04:17.775 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:17.775 CXX test/cpp_headers/bdev_zone.o 00:04:17.775 LINK hello_sock 00:04:17.775 LINK thread 00:04:18.032 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:18.032 CC test/app/jsoncat/jsoncat.o 00:04:18.032 CC examples/vmd/led/led.o 00:04:18.032 CC app/iscsi_tgt/iscsi_tgt.o 00:04:18.032 CC test/event/scheduler/scheduler.o 00:04:18.032 CXX test/cpp_headers/bit_array.o 00:04:18.032 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:18.290 LINK jsoncat 00:04:18.290 LINK led 00:04:18.290 CC test/env/pci/pci_ut.o 00:04:18.290 CC examples/idxd/perf/perf.o 00:04:18.290 LINK iscsi_tgt 00:04:18.290 CXX test/cpp_headers/bit_pool.o 00:04:18.290 CXX test/cpp_headers/blob_bdev.o 00:04:18.290 LINK scheduler 00:04:18.290 CXX test/cpp_headers/blobfs_bdev.o 00:04:18.548 CXX test/cpp_headers/blobfs.o 00:04:18.548 CC app/spdk_tgt/spdk_tgt.o 00:04:18.548 CC examples/nvme/hello_world/hello_world.o 00:04:18.548 LINK vhost_fuzz 00:04:18.548 LINK idxd_perf 00:04:18.805 CXX test/cpp_headers/blob.o 00:04:18.805 LINK pci_ut 00:04:18.805 CC examples/accel/perf/accel_perf.o 00:04:18.805 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:18.805 LINK spdk_tgt 00:04:18.805 LINK hello_world 00:04:18.805 CXX 
test/cpp_headers/conf.o 00:04:19.063 CXX test/cpp_headers/config.o 00:04:19.063 CC examples/blob/hello_world/hello_blob.o 00:04:19.063 CC test/accel/dif/dif.o 00:04:19.063 LINK hello_fsdev 00:04:19.063 CXX test/cpp_headers/cpuset.o 00:04:19.063 CC examples/blob/cli/blobcli.o 00:04:19.063 LINK memory_ut 00:04:19.063 CC examples/nvme/reconnect/reconnect.o 00:04:19.063 CC app/spdk_lspci/spdk_lspci.o 00:04:19.320 LINK hello_blob 00:04:19.320 CXX test/cpp_headers/crc16.o 00:04:19.320 CXX test/cpp_headers/crc32.o 00:04:19.320 LINK spdk_lspci 00:04:19.320 LINK accel_perf 00:04:19.320 CC test/app/stub/stub.o 00:04:19.577 CXX test/cpp_headers/crc64.o 00:04:19.577 CC app/spdk_nvme_perf/perf.o 00:04:19.577 LINK reconnect 00:04:19.577 CC app/spdk_nvme_identify/identify.o 00:04:19.577 LINK stub 00:04:19.577 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:19.577 CC examples/nvme/arbitration/arbitration.o 00:04:19.577 CXX test/cpp_headers/dif.o 00:04:19.577 LINK blobcli 00:04:19.835 CXX test/cpp_headers/dma.o 00:04:19.835 CXX test/cpp_headers/endian.o 00:04:19.835 CXX test/cpp_headers/env_dpdk.o 00:04:19.835 LINK dif 00:04:19.835 CXX test/cpp_headers/env.o 00:04:20.093 LINK arbitration 00:04:20.093 LINK iscsi_fuzz 00:04:20.093 CC test/blobfs/mkfs/mkfs.o 00:04:20.093 CXX test/cpp_headers/event.o 00:04:20.093 CC test/lvol/esnap/esnap.o 00:04:20.093 CXX test/cpp_headers/fd_group.o 00:04:20.093 CC app/spdk_nvme_discover/discovery_aer.o 00:04:20.093 LINK nvme_manage 00:04:20.351 CC test/nvme/aer/aer.o 00:04:20.351 CXX test/cpp_headers/fd.o 00:04:20.351 LINK mkfs 00:04:20.351 LINK spdk_nvme_discover 00:04:20.351 CXX test/cpp_headers/file.o 00:04:20.351 CC examples/nvme/hotplug/hotplug.o 00:04:20.351 CC test/nvme/reset/reset.o 00:04:20.609 CC examples/bdev/hello_world/hello_bdev.o 00:04:20.609 LINK spdk_nvme_identify 00:04:20.609 LINK aer 00:04:20.609 CC test/nvme/sgl/sgl.o 00:04:20.609 LINK spdk_nvme_perf 00:04:20.609 CXX test/cpp_headers/fsdev.o 00:04:20.609 CC 
test/nvme/e2edp/nvme_dp.o 00:04:20.866 LINK hotplug 00:04:20.866 CXX test/cpp_headers/fsdev_module.o 00:04:20.866 CXX test/cpp_headers/ftl.o 00:04:20.866 LINK hello_bdev 00:04:20.866 LINK reset 00:04:20.866 CC app/spdk_top/spdk_top.o 00:04:20.866 LINK sgl 00:04:20.866 CXX test/cpp_headers/fuse_dispatcher.o 00:04:20.866 CC app/vhost/vhost.o 00:04:21.124 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:21.124 LINK nvme_dp 00:04:21.124 CC examples/nvme/abort/abort.o 00:04:21.124 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:21.124 CC examples/bdev/bdevperf/bdevperf.o 00:04:21.124 CXX test/cpp_headers/gpt_spec.o 00:04:21.124 LINK vhost 00:04:21.124 LINK cmb_copy 00:04:21.385 CC test/nvme/overhead/overhead.o 00:04:21.385 LINK pmr_persistence 00:04:21.385 CXX test/cpp_headers/hexlify.o 00:04:21.385 CC test/bdev/bdevio/bdevio.o 00:04:21.385 CC test/nvme/err_injection/err_injection.o 00:04:21.385 CC test/nvme/startup/startup.o 00:04:21.385 CXX test/cpp_headers/histogram_data.o 00:04:21.645 LINK abort 00:04:21.645 CC test/nvme/reserve/reserve.o 00:04:21.645 LINK overhead 00:04:21.645 CXX test/cpp_headers/idxd.o 00:04:21.645 CXX test/cpp_headers/idxd_spec.o 00:04:21.645 LINK err_injection 00:04:21.645 LINK startup 00:04:21.645 LINK bdevio 00:04:21.645 CXX test/cpp_headers/init.o 00:04:21.903 LINK reserve 00:04:21.903 CXX test/cpp_headers/ioat.o 00:04:21.903 CXX test/cpp_headers/ioat_spec.o 00:04:21.903 CXX test/cpp_headers/iscsi_spec.o 00:04:21.903 CXX test/cpp_headers/json.o 00:04:21.903 CC test/nvme/simple_copy/simple_copy.o 00:04:21.903 CXX test/cpp_headers/jsonrpc.o 00:04:21.903 CXX test/cpp_headers/keyring.o 00:04:21.903 LINK spdk_top 00:04:22.161 CXX test/cpp_headers/keyring_module.o 00:04:22.161 CC test/nvme/connect_stress/connect_stress.o 00:04:22.161 LINK bdevperf 00:04:22.161 CC test/nvme/boot_partition/boot_partition.o 00:04:22.161 CC test/nvme/compliance/nvme_compliance.o 00:04:22.161 LINK simple_copy 00:04:22.161 CXX test/cpp_headers/likely.o 00:04:22.161 
CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:22.161 CC test/nvme/fused_ordering/fused_ordering.o 00:04:22.419 CC app/spdk_dd/spdk_dd.o 00:04:22.419 LINK boot_partition 00:04:22.419 LINK connect_stress 00:04:22.419 CXX test/cpp_headers/log.o 00:04:22.419 LINK doorbell_aers 00:04:22.419 LINK fused_ordering 00:04:22.419 CXX test/cpp_headers/lvol.o 00:04:22.419 CC examples/nvmf/nvmf/nvmf.o 00:04:22.677 CXX test/cpp_headers/md5.o 00:04:22.677 CC app/fio/nvme/fio_plugin.o 00:04:22.677 CC app/fio/bdev/fio_plugin.o 00:04:22.677 LINK nvme_compliance 00:04:22.677 CXX test/cpp_headers/memory.o 00:04:22.677 CC test/nvme/fdp/fdp.o 00:04:22.677 LINK spdk_dd 00:04:22.677 CC test/nvme/cuse/cuse.o 00:04:22.935 CXX test/cpp_headers/mmio.o 00:04:22.935 CXX test/cpp_headers/nbd.o 00:04:22.935 CXX test/cpp_headers/net.o 00:04:22.935 LINK nvmf 00:04:22.935 CXX test/cpp_headers/notify.o 00:04:22.935 CXX test/cpp_headers/nvme.o 00:04:22.935 CXX test/cpp_headers/nvme_intel.o 00:04:22.935 CXX test/cpp_headers/nvme_ocssd.o 00:04:23.193 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:23.193 CXX test/cpp_headers/nvme_spec.o 00:04:23.193 CXX test/cpp_headers/nvme_zns.o 00:04:23.193 LINK fdp 00:04:23.193 CXX test/cpp_headers/nvmf_cmd.o 00:04:23.193 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:23.193 LINK spdk_bdev 00:04:23.193 LINK spdk_nvme 00:04:23.193 CXX test/cpp_headers/nvmf.o 00:04:23.193 CXX test/cpp_headers/nvmf_spec.o 00:04:23.193 CXX test/cpp_headers/nvmf_transport.o 00:04:23.451 CXX test/cpp_headers/opal.o 00:04:23.451 CXX test/cpp_headers/opal_spec.o 00:04:23.451 CXX test/cpp_headers/pci_ids.o 00:04:23.451 CXX test/cpp_headers/pipe.o 00:04:23.451 CXX test/cpp_headers/queue.o 00:04:23.451 CXX test/cpp_headers/reduce.o 00:04:23.451 CXX test/cpp_headers/rpc.o 00:04:23.451 CXX test/cpp_headers/scheduler.o 00:04:23.451 CXX test/cpp_headers/scsi.o 00:04:23.451 CXX test/cpp_headers/scsi_spec.o 00:04:23.451 CXX test/cpp_headers/sock.o 00:04:23.451 CXX test/cpp_headers/stdinc.o 00:04:23.451 CXX 
test/cpp_headers/string.o 00:04:23.709 CXX test/cpp_headers/thread.o 00:04:23.709 CXX test/cpp_headers/trace.o 00:04:23.709 CXX test/cpp_headers/trace_parser.o 00:04:23.709 CXX test/cpp_headers/tree.o 00:04:23.709 CXX test/cpp_headers/ublk.o 00:04:23.709 CXX test/cpp_headers/util.o 00:04:23.709 CXX test/cpp_headers/uuid.o 00:04:23.709 CXX test/cpp_headers/version.o 00:04:23.709 CXX test/cpp_headers/vfio_user_pci.o 00:04:23.709 CXX test/cpp_headers/vfio_user_spec.o 00:04:23.709 CXX test/cpp_headers/vhost.o 00:04:23.967 CXX test/cpp_headers/vmd.o 00:04:23.967 CXX test/cpp_headers/xor.o 00:04:23.967 CXX test/cpp_headers/zipf.o 00:04:24.533 LINK cuse 00:04:27.063 LINK esnap 00:04:27.322 00:04:27.322 real 1m37.968s 00:04:27.322 user 9m4.192s 00:04:27.322 sys 1m49.937s 00:04:27.322 11:16:10 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:27.322 ************************************ 00:04:27.322 END TEST make 00:04:27.322 ************************************ 00:04:27.322 11:16:10 make -- common/autotest_common.sh@10 -- $ set +x 00:04:27.322 11:16:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:27.322 11:16:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:27.322 11:16:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:27.322 11:16:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.322 11:16:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:27.322 11:16:10 -- pm/common@44 -- $ pid=5247 00:04:27.322 11:16:10 -- pm/common@50 -- $ kill -TERM 5247 00:04:27.322 11:16:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.322 11:16:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:27.322 11:16:10 -- pm/common@44 -- $ pid=5248 00:04:27.322 11:16:10 -- pm/common@50 -- $ kill -TERM 5248 00:04:27.322 11:16:10 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || 
SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:27.322 11:16:10 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:27.322 11:16:10 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:27.322 11:16:10 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:27.322 11:16:10 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:27.581 11:16:10 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:27.581 11:16:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.581 11:16:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.581 11:16:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.581 11:16:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.581 11:16:10 -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.581 11:16:10 -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.581 11:16:10 -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.581 11:16:10 -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.581 11:16:10 -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.581 11:16:10 -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.581 11:16:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.581 11:16:10 -- scripts/common.sh@344 -- # case "$op" in 00:04:27.581 11:16:10 -- scripts/common.sh@345 -- # : 1 00:04:27.581 11:16:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.581 11:16:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.581 11:16:10 -- scripts/common.sh@365 -- # decimal 1 00:04:27.581 11:16:10 -- scripts/common.sh@353 -- # local d=1 00:04:27.581 11:16:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.581 11:16:10 -- scripts/common.sh@355 -- # echo 1 00:04:27.581 11:16:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.581 11:16:10 -- scripts/common.sh@366 -- # decimal 2 00:04:27.581 11:16:10 -- scripts/common.sh@353 -- # local d=2 00:04:27.581 11:16:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.581 11:16:10 -- scripts/common.sh@355 -- # echo 2 00:04:27.581 11:16:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.581 11:16:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.581 11:16:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.581 11:16:10 -- scripts/common.sh@368 -- # return 0 00:04:27.581 11:16:10 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.581 11:16:10 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:27.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.581 --rc genhtml_branch_coverage=1 00:04:27.581 --rc genhtml_function_coverage=1 00:04:27.581 --rc genhtml_legend=1 00:04:27.581 --rc geninfo_all_blocks=1 00:04:27.581 --rc geninfo_unexecuted_blocks=1 00:04:27.581 00:04:27.581 ' 00:04:27.581 11:16:10 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:27.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.581 --rc genhtml_branch_coverage=1 00:04:27.581 --rc genhtml_function_coverage=1 00:04:27.581 --rc genhtml_legend=1 00:04:27.581 --rc geninfo_all_blocks=1 00:04:27.581 --rc geninfo_unexecuted_blocks=1 00:04:27.581 00:04:27.581 ' 00:04:27.581 11:16:10 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:27.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.581 --rc genhtml_branch_coverage=1 00:04:27.581 --rc 
genhtml_function_coverage=1 00:04:27.581 --rc genhtml_legend=1 00:04:27.581 --rc geninfo_all_blocks=1 00:04:27.581 --rc geninfo_unexecuted_blocks=1 00:04:27.581 00:04:27.581 ' 00:04:27.581 11:16:10 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:27.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.581 --rc genhtml_branch_coverage=1 00:04:27.581 --rc genhtml_function_coverage=1 00:04:27.581 --rc genhtml_legend=1 00:04:27.581 --rc geninfo_all_blocks=1 00:04:27.581 --rc geninfo_unexecuted_blocks=1 00:04:27.581 00:04:27.581 ' 00:04:27.581 11:16:10 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.581 11:16:10 -- nvmf/common.sh@7 -- # uname -s 00:04:27.581 11:16:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.581 11:16:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.581 11:16:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.581 11:16:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.581 11:16:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.581 11:16:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.581 11:16:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.581 11:16:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.581 11:16:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.581 11:16:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.581 11:16:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5560c3f1-84d4-440d-a043-db521604d4ff 00:04:27.581 11:16:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5560c3f1-84d4-440d-a043-db521604d4ff 00:04:27.581 11:16:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.581 11:16:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.581 11:16:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.581 11:16:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:27.581 11:16:10 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.581 11:16:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:27.581 11:16:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.581 11:16:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.581 11:16:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.581 11:16:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.581 11:16:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.581 11:16:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.581 11:16:10 -- paths/export.sh@5 -- # export PATH 00:04:27.581 11:16:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.581 11:16:10 -- nvmf/common.sh@51 -- # : 0 00:04:27.581 11:16:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:27.581 11:16:10 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:27.581 11:16:10 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:27.581 11:16:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.581 11:16:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.581 11:16:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:27.581 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:27.581 11:16:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:27.581 11:16:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:27.581 11:16:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:27.581 11:16:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:27.581 11:16:10 -- spdk/autotest.sh@32 -- # uname -s 00:04:27.581 11:16:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:27.581 11:16:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:27.581 11:16:10 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.581 11:16:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:27.581 11:16:10 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.581 11:16:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:27.581 11:16:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:27.581 11:16:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:27.581 11:16:10 -- spdk/autotest.sh@48 -- # udevadm_pid=54328 00:04:27.581 11:16:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:27.581 11:16:10 -- pm/common@17 -- # local monitor 00:04:27.581 11:16:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.581 11:16:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.581 11:16:10 -- pm/common@25 -- # sleep 1 00:04:27.581 11:16:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:27.581 11:16:10 -- pm/common@21 -- # date +%s 00:04:27.581 11:16:10 -- 
pm/common@21 -- # date +%s 00:04:27.581 11:16:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731669370 00:04:27.582 11:16:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731669370 00:04:27.582 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731669370_collect-cpu-load.pm.log 00:04:27.582 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731669370_collect-vmstat.pm.log 00:04:28.518 11:16:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:28.518 11:16:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:28.518 11:16:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:28.518 11:16:11 -- common/autotest_common.sh@10 -- # set +x 00:04:28.518 11:16:11 -- spdk/autotest.sh@59 -- # create_test_list 00:04:28.518 11:16:11 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:28.518 11:16:11 -- common/autotest_common.sh@10 -- # set +x 00:04:28.518 11:16:11 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:28.518 11:16:11 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:28.518 11:16:11 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:28.518 11:16:11 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:28.518 11:16:11 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:28.518 11:16:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:28.518 11:16:11 -- common/autotest_common.sh@1455 -- # uname 00:04:28.518 11:16:11 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:28.518 11:16:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:28.518 11:16:11 -- common/autotest_common.sh@1475 -- 
# uname 00:04:28.518 11:16:11 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:28.518 11:16:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:28.518 11:16:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:28.777 lcov: LCOV version 1.15 00:04:28.777 11:16:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:43.655 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:43.655 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:58.533 11:16:41 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:58.533 11:16:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.533 11:16:41 -- common/autotest_common.sh@10 -- # set +x 00:04:58.533 11:16:41 -- spdk/autotest.sh@78 -- # rm -f 00:04:58.533 11:16:41 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:59.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.357 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:59.357 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:59.357 11:16:42 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:59.357 11:16:42 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:59.357 11:16:42 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:59.357 11:16:42 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:59.357 
11:16:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:59.357 11:16:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:59.357 11:16:42 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:59.357 11:16:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:59.357 11:16:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:59.357 11:16:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:59.357 11:16:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:59.357 11:16:42 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:59.357 11:16:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:59.357 11:16:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:59.357 11:16:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:59.357 11:16:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:59.357 11:16:42 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:59.357 11:16:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:59.357 11:16:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:59.357 11:16:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:59.357 11:16:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:59.357 11:16:42 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:59.357 11:16:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:59.357 11:16:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:59.357 11:16:42 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:59.357 11:16:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:59.357 11:16:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:59.357 11:16:42 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:59.357 11:16:42 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:59.357 11:16:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:59.357 No valid GPT data, bailing 00:04:59.357 11:16:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:59.357 11:16:42 -- scripts/common.sh@394 -- # pt= 00:04:59.357 11:16:42 -- scripts/common.sh@395 -- # return 1 00:04:59.357 11:16:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:59.357 1+0 records in 00:04:59.357 1+0 records out 00:04:59.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434416 s, 241 MB/s 00:04:59.357 11:16:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:59.357 11:16:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:59.357 11:16:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:59.357 11:16:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:59.357 11:16:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:59.357 No valid GPT data, bailing 00:04:59.357 11:16:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:59.357 11:16:42 -- scripts/common.sh@394 -- # pt= 00:04:59.357 11:16:42 -- scripts/common.sh@395 -- # return 1 00:04:59.357 11:16:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:59.357 1+0 records in 00:04:59.357 1+0 records out 00:04:59.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00468634 s, 224 MB/s 00:04:59.357 11:16:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:59.357 11:16:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:59.357 11:16:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:59.357 11:16:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:59.357 11:16:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:59.616 No valid GPT data, bailing 00:04:59.616 11:16:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:59.616 11:16:42 -- scripts/common.sh@394 -- # pt= 00:04:59.616 11:16:42 -- scripts/common.sh@395 -- # return 1 00:04:59.616 11:16:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:59.616 1+0 records in 00:04:59.616 1+0 records out 00:04:59.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477388 s, 220 MB/s 00:04:59.616 11:16:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:59.616 11:16:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:59.616 11:16:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:59.616 11:16:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:59.616 11:16:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:59.616 No valid GPT data, bailing 00:04:59.616 11:16:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:59.616 11:16:42 -- scripts/common.sh@394 -- # pt= 00:04:59.616 11:16:42 -- scripts/common.sh@395 -- # return 1 00:04:59.616 11:16:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:59.616 1+0 records in 00:04:59.616 1+0 records out 00:04:59.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467966 s, 224 MB/s 00:04:59.616 11:16:42 -- spdk/autotest.sh@105 -- # sync 00:04:59.873 11:16:42 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:59.873 11:16:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:59.873 11:16:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:01.777 11:16:44 -- spdk/autotest.sh@111 -- # uname -s 00:05:01.777 11:16:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:01.777 11:16:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:01.777 11:16:44 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:02.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.343 Hugepages 00:05:02.343 node hugesize free / total 00:05:02.343 node0 1048576kB 0 / 0 00:05:02.343 node0 2048kB 0 / 0 00:05:02.343 00:05:02.343 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.343 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:02.601 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:02.601 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:02.601 11:16:45 -- spdk/autotest.sh@117 -- # uname -s 00:05:02.601 11:16:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:02.601 11:16:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:02.601 11:16:45 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.427 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.427 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.427 11:16:46 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:04.801 11:16:47 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:04.801 11:16:47 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:04.801 11:16:47 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:04.801 11:16:47 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:04.801 11:16:47 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:04.801 11:16:47 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:04.801 11:16:47 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.801 11:16:47 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:04.801 11:16:47 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:04.801 11:16:47 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:04.801 11:16:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:04.801 11:16:47 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.801 Waiting for block devices as requested 00:05:05.060 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:05.060 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:05.060 11:16:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:05.060 11:16:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:05.060 11:16:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:05.060 11:16:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:05.060 11:16:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:05.060 11:16:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:05.060 11:16:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:05.060 11:16:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:05.060 11:16:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:05.060 11:16:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:05.060 11:16:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:05.060 11:16:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:05.060 11:16:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:05.060 11:16:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:05.060 11:16:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:05.060 11:16:47 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:05.060 11:16:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:05.060 11:16:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:05.060 11:16:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:05.060 11:16:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:05.060 11:16:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:05.060 11:16:47 -- common/autotest_common.sh@1541 -- # continue 00:05:05.060 11:16:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:05.060 11:16:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:05.060 11:16:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:05.060 11:16:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:05.060 11:16:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:05.060 11:16:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:05.060 11:16:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:05.060 11:16:48 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:05.060 11:16:48 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:05.060 11:16:48 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:05.060 11:16:48 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:05.060 11:16:48 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:05.060 11:16:48 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:05.318 11:16:48 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:05.318 11:16:48 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:05.318 11:16:48 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:05.318 11:16:48 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:05:05.318 11:16:48 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:05.318 11:16:48 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:05.318 11:16:48 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:05.318 11:16:48 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:05.318 11:16:48 -- common/autotest_common.sh@1541 -- # continue 00:05:05.318 11:16:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:05.318 11:16:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.318 11:16:48 -- common/autotest_common.sh@10 -- # set +x 00:05:05.318 11:16:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:05.318 11:16:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.318 11:16:48 -- common/autotest_common.sh@10 -- # set +x 00:05:05.318 11:16:48 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.885 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.885 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.145 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.145 11:16:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:06.145 11:16:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.145 11:16:48 -- common/autotest_common.sh@10 -- # set +x 00:05:06.145 11:16:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:06.145 11:16:48 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:06.145 11:16:48 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:06.145 11:16:48 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:06.145 11:16:48 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:06.145 11:16:48 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:06.145 11:16:48 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:06.145 11:16:48 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:06.145 
11:16:48 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:06.145 11:16:48 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:06.145 11:16:48 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:06.145 11:16:48 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:06.145 11:16:48 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:06.145 11:16:49 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:06.145 11:16:49 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:06.145 11:16:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:06.145 11:16:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:06.145 11:16:49 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:06.145 11:16:49 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:06.145 11:16:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:06.145 11:16:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:06.145 11:16:49 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:06.145 11:16:49 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:06.145 11:16:49 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:06.145 11:16:49 -- common/autotest_common.sh@1570 -- # return 0 00:05:06.145 11:16:49 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:06.145 11:16:49 -- common/autotest_common.sh@1578 -- # return 0 00:05:06.145 11:16:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:06.145 11:16:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:06.145 11:16:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:06.145 11:16:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:06.145 11:16:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:06.145 11:16:49 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.145 11:16:49 -- common/autotest_common.sh@10 -- # set +x 00:05:06.145 11:16:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:06.145 11:16:49 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:06.145 11:16:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.145 11:16:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.145 11:16:49 -- common/autotest_common.sh@10 -- # set +x 00:05:06.145 ************************************ 00:05:06.145 START TEST env 00:05:06.145 ************************************ 00:05:06.145 11:16:49 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:06.404 * Looking for test storage... 00:05:06.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:06.404 11:16:49 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:06.404 11:16:49 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:06.404 11:16:49 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:06.404 11:16:49 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:06.404 11:16:49 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.404 11:16:49 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.404 11:16:49 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.404 11:16:49 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.404 11:16:49 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.404 11:16:49 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.404 11:16:49 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.404 11:16:49 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.404 11:16:49 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.404 11:16:49 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.404 11:16:49 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.404 11:16:49 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:06.404 11:16:49 env -- scripts/common.sh@345 -- # : 1 00:05:06.404 11:16:49 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.404 11:16:49 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.404 11:16:49 env -- scripts/common.sh@365 -- # decimal 1 00:05:06.404 11:16:49 env -- scripts/common.sh@353 -- # local d=1 00:05:06.404 11:16:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.404 11:16:49 env -- scripts/common.sh@355 -- # echo 1 00:05:06.404 11:16:49 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.404 11:16:49 env -- scripts/common.sh@366 -- # decimal 2 00:05:06.404 11:16:49 env -- scripts/common.sh@353 -- # local d=2 00:05:06.404 11:16:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.404 11:16:49 env -- scripts/common.sh@355 -- # echo 2 00:05:06.404 11:16:49 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.404 11:16:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.404 11:16:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.404 11:16:49 env -- scripts/common.sh@368 -- # return 0 00:05:06.404 11:16:49 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.404 11:16:49 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:06.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.404 --rc genhtml_branch_coverage=1 00:05:06.404 --rc genhtml_function_coverage=1 00:05:06.404 --rc genhtml_legend=1 00:05:06.404 --rc geninfo_all_blocks=1 00:05:06.404 --rc geninfo_unexecuted_blocks=1 00:05:06.404 00:05:06.404 ' 00:05:06.404 11:16:49 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:06.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.404 --rc genhtml_branch_coverage=1 00:05:06.404 --rc genhtml_function_coverage=1 00:05:06.404 --rc genhtml_legend=1 00:05:06.404 --rc 
geninfo_all_blocks=1 00:05:06.404 --rc geninfo_unexecuted_blocks=1 00:05:06.404 00:05:06.404 ' 00:05:06.404 11:16:49 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:06.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.404 --rc genhtml_branch_coverage=1 00:05:06.404 --rc genhtml_function_coverage=1 00:05:06.404 --rc genhtml_legend=1 00:05:06.404 --rc geninfo_all_blocks=1 00:05:06.404 --rc geninfo_unexecuted_blocks=1 00:05:06.404 00:05:06.404 ' 00:05:06.404 11:16:49 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:06.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.404 --rc genhtml_branch_coverage=1 00:05:06.404 --rc genhtml_function_coverage=1 00:05:06.404 --rc genhtml_legend=1 00:05:06.404 --rc geninfo_all_blocks=1 00:05:06.404 --rc geninfo_unexecuted_blocks=1 00:05:06.404 00:05:06.404 ' 00:05:06.404 11:16:49 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:06.404 11:16:49 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.404 11:16:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.404 11:16:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.404 ************************************ 00:05:06.404 START TEST env_memory 00:05:06.404 ************************************ 00:05:06.404 11:16:49 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:06.404 00:05:06.404 00:05:06.404 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.404 http://cunit.sourceforge.net/ 00:05:06.404 00:05:06.404 00:05:06.404 Suite: memory 00:05:06.404 Test: alloc and free memory map ...[2024-11-15 11:16:49.341388] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:06.663 passed 00:05:06.663 Test: mem map translation ...[2024-11-15 11:16:49.404125] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:06.663 [2024-11-15 11:16:49.404226] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:06.663 [2024-11-15 11:16:49.404319] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:06.663 [2024-11-15 11:16:49.404351] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:06.663 passed 00:05:06.663 Test: mem map registration ...[2024-11-15 11:16:49.502839] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:06.663 [2024-11-15 11:16:49.502919] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:06.663 passed 00:05:06.922 Test: mem map adjacent registrations ...passed 00:05:06.922 00:05:06.922 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.922 suites 1 1 n/a 0 0 00:05:06.922 tests 4 4 4 0 0 00:05:06.923 asserts 152 152 152 0 n/a 00:05:06.923 00:05:06.923 Elapsed time = 0.347 seconds 00:05:06.923 00:05:06.923 real 0m0.390s 00:05:06.923 user 0m0.350s 00:05:06.923 sys 0m0.029s 00:05:06.923 11:16:49 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.923 11:16:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:06.923 ************************************ 00:05:06.923 END TEST env_memory 00:05:06.923 ************************************ 00:05:06.923 11:16:49 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.923 
11:16:49 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.923 11:16:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.923 11:16:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.923 ************************************ 00:05:06.923 START TEST env_vtophys 00:05:06.923 ************************************ 00:05:06.923 11:16:49 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.923 EAL: lib.eal log level changed from notice to debug 00:05:06.923 EAL: Detected lcore 0 as core 0 on socket 0 00:05:06.923 EAL: Detected lcore 1 as core 0 on socket 0 00:05:06.923 EAL: Detected lcore 2 as core 0 on socket 0 00:05:06.923 EAL: Detected lcore 3 as core 0 on socket 0 00:05:06.923 EAL: Detected lcore 4 as core 0 on socket 0 00:05:06.923 EAL: Detected lcore 5 as core 0 on socket 0 00:05:06.923 EAL: Detected lcore 6 as core 0 on socket 0 00:05:06.923 EAL: Detected lcore 7 as core 0 on socket 0 00:05:06.923 EAL: Detected lcore 8 as core 0 on socket 0 00:05:06.923 EAL: Detected lcore 9 as core 0 on socket 0 00:05:06.923 EAL: Maximum logical cores by configuration: 128 00:05:06.923 EAL: Detected CPU lcores: 10 00:05:06.923 EAL: Detected NUMA nodes: 1 00:05:06.923 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:06.923 EAL: Detected shared linkage of DPDK 00:05:06.923 EAL: No shared files mode enabled, IPC will be disabled 00:05:06.923 EAL: Selected IOVA mode 'PA' 00:05:06.923 EAL: Probing VFIO support... 00:05:06.923 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.923 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:06.923 EAL: Ask a virtual area of 0x2e000 bytes 00:05:06.923 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:06.923 EAL: Setting up physically contiguous memory... 
00:05:06.923 EAL: Setting maximum number of open files to 524288 00:05:06.923 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:06.923 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:06.923 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.923 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:06.923 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.923 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.923 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:06.923 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:06.923 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.923 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:06.923 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.923 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.923 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:06.923 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:06.923 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.923 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:06.923 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.923 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.923 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:06.923 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:06.923 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.923 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:06.923 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.923 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.923 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:06.923 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:06.923 EAL: Hugepages will be freed exactly as allocated. 
00:05:06.923 EAL: No shared files mode enabled, IPC is disabled 00:05:06.923 EAL: No shared files mode enabled, IPC is disabled 00:05:07.182 EAL: TSC frequency is ~2200000 KHz 00:05:07.182 EAL: Main lcore 0 is ready (tid=7fb39a2e4a40;cpuset=[0]) 00:05:07.182 EAL: Trying to obtain current memory policy. 00:05:07.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.182 EAL: Restoring previous memory policy: 0 00:05:07.182 EAL: request: mp_malloc_sync 00:05:07.182 EAL: No shared files mode enabled, IPC is disabled 00:05:07.182 EAL: Heap on socket 0 was expanded by 2MB 00:05:07.182 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:07.182 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:07.182 EAL: Mem event callback 'spdk:(nil)' registered 00:05:07.182 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:07.182 00:05:07.182 00:05:07.182 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.182 http://cunit.sourceforge.net/ 00:05:07.182 00:05:07.182 00:05:07.182 Suite: components_suite 00:05:07.750 Test: vtophys_malloc_test ...passed 00:05:07.750 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:07.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.750 EAL: Restoring previous memory policy: 4 00:05:07.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.750 EAL: request: mp_malloc_sync 00:05:07.750 EAL: No shared files mode enabled, IPC is disabled 00:05:07.750 EAL: Heap on socket 0 was expanded by 4MB 00:05:07.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.750 EAL: request: mp_malloc_sync 00:05:07.750 EAL: No shared files mode enabled, IPC is disabled 00:05:07.750 EAL: Heap on socket 0 was shrunk by 4MB 00:05:07.750 EAL: Trying to obtain current memory policy. 
00:05:07.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.750 EAL: Restoring previous memory policy: 4 00:05:07.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.750 EAL: request: mp_malloc_sync 00:05:07.750 EAL: No shared files mode enabled, IPC is disabled 00:05:07.750 EAL: Heap on socket 0 was expanded by 6MB 00:05:07.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.750 EAL: request: mp_malloc_sync 00:05:07.750 EAL: No shared files mode enabled, IPC is disabled 00:05:07.750 EAL: Heap on socket 0 was shrunk by 6MB 00:05:07.750 EAL: Trying to obtain current memory policy. 00:05:07.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.750 EAL: Restoring previous memory policy: 4 00:05:07.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.750 EAL: request: mp_malloc_sync 00:05:07.750 EAL: No shared files mode enabled, IPC is disabled 00:05:07.750 EAL: Heap on socket 0 was expanded by 10MB 00:05:07.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.750 EAL: request: mp_malloc_sync 00:05:07.750 EAL: No shared files mode enabled, IPC is disabled 00:05:07.750 EAL: Heap on socket 0 was shrunk by 10MB 00:05:07.750 EAL: Trying to obtain current memory policy. 00:05:07.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.750 EAL: Restoring previous memory policy: 4 00:05:07.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.750 EAL: request: mp_malloc_sync 00:05:07.750 EAL: No shared files mode enabled, IPC is disabled 00:05:07.750 EAL: Heap on socket 0 was expanded by 18MB 00:05:07.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.750 EAL: request: mp_malloc_sync 00:05:07.750 EAL: No shared files mode enabled, IPC is disabled 00:05:07.750 EAL: Heap on socket 0 was shrunk by 18MB 00:05:07.750 EAL: Trying to obtain current memory policy. 
00:05:07.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.751 EAL: Restoring previous memory policy: 4 00:05:07.751 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.751 EAL: request: mp_malloc_sync 00:05:07.751 EAL: No shared files mode enabled, IPC is disabled 00:05:07.751 EAL: Heap on socket 0 was expanded by 34MB 00:05:07.751 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.751 EAL: request: mp_malloc_sync 00:05:07.751 EAL: No shared files mode enabled, IPC is disabled 00:05:07.751 EAL: Heap on socket 0 was shrunk by 34MB 00:05:07.751 EAL: Trying to obtain current memory policy. 00:05:07.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.751 EAL: Restoring previous memory policy: 4 00:05:07.751 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.751 EAL: request: mp_malloc_sync 00:05:07.751 EAL: No shared files mode enabled, IPC is disabled 00:05:07.751 EAL: Heap on socket 0 was expanded by 66MB 00:05:08.028 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.028 EAL: request: mp_malloc_sync 00:05:08.028 EAL: No shared files mode enabled, IPC is disabled 00:05:08.028 EAL: Heap on socket 0 was shrunk by 66MB 00:05:08.028 EAL: Trying to obtain current memory policy. 00:05:08.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.028 EAL: Restoring previous memory policy: 4 00:05:08.028 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.028 EAL: request: mp_malloc_sync 00:05:08.028 EAL: No shared files mode enabled, IPC is disabled 00:05:08.028 EAL: Heap on socket 0 was expanded by 130MB 00:05:08.292 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.292 EAL: request: mp_malloc_sync 00:05:08.292 EAL: No shared files mode enabled, IPC is disabled 00:05:08.292 EAL: Heap on socket 0 was shrunk by 130MB 00:05:08.551 EAL: Trying to obtain current memory policy. 
00:05:08.551 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.551 EAL: Restoring previous memory policy: 4 00:05:08.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.551 EAL: request: mp_malloc_sync 00:05:08.551 EAL: No shared files mode enabled, IPC is disabled 00:05:08.551 EAL: Heap on socket 0 was expanded by 258MB 00:05:09.117 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.117 EAL: request: mp_malloc_sync 00:05:09.117 EAL: No shared files mode enabled, IPC is disabled 00:05:09.117 EAL: Heap on socket 0 was shrunk by 258MB 00:05:09.375 EAL: Trying to obtain current memory policy. 00:05:09.375 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.632 EAL: Restoring previous memory policy: 4 00:05:09.632 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.632 EAL: request: mp_malloc_sync 00:05:09.632 EAL: No shared files mode enabled, IPC is disabled 00:05:09.632 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.199 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.457 EAL: request: mp_malloc_sync 00:05:10.457 EAL: No shared files mode enabled, IPC is disabled 00:05:10.457 EAL: Heap on socket 0 was shrunk by 514MB 00:05:11.024 EAL: Trying to obtain current memory policy. 
00:05:11.024 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.592 EAL: Restoring previous memory policy: 4 00:05:11.592 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.592 EAL: request: mp_malloc_sync 00:05:11.592 EAL: No shared files mode enabled, IPC is disabled 00:05:11.592 EAL: Heap on socket 0 was expanded by 1026MB 00:05:12.979 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.237 EAL: request: mp_malloc_sync 00:05:13.237 EAL: No shared files mode enabled, IPC is disabled 00:05:13.237 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:14.610 passed 00:05:14.610 00:05:14.610 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.610 suites 1 1 n/a 0 0 00:05:14.610 tests 2 2 2 0 0 00:05:14.610 asserts 5775 5775 5775 0 n/a 00:05:14.610 00:05:14.610 Elapsed time = 7.284 seconds 00:05:14.610 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.610 EAL: request: mp_malloc_sync 00:05:14.610 EAL: No shared files mode enabled, IPC is disabled 00:05:14.610 EAL: Heap on socket 0 was shrunk by 2MB 00:05:14.610 EAL: No shared files mode enabled, IPC is disabled 00:05:14.610 EAL: No shared files mode enabled, IPC is disabled 00:05:14.610 EAL: No shared files mode enabled, IPC is disabled 00:05:14.610 00:05:14.610 real 0m7.638s 00:05:14.610 user 0m6.240s 00:05:14.610 sys 0m1.229s 00:05:14.610 11:16:57 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.610 11:16:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:14.610 ************************************ 00:05:14.610 END TEST env_vtophys 00:05:14.610 ************************************ 00:05:14.610 11:16:57 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:14.610 11:16:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.610 11:16:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.610 11:16:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.610 
************************************ 00:05:14.610 START TEST env_pci 00:05:14.610 ************************************ 00:05:14.610 11:16:57 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:14.610 00:05:14.610 00:05:14.610 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.610 http://cunit.sourceforge.net/ 00:05:14.610 00:05:14.610 00:05:14.610 Suite: pci 00:05:14.610 Test: pci_hook ...[2024-11-15 11:16:57.433663] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56598 has claimed it 00:05:14.610 passed 00:05:14.610 00:05:14.610 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.610 suites 1 1 n/a 0 0 00:05:14.610 tests 1 1 1 0 0 00:05:14.610 asserts 25 25 25 0 n/a 00:05:14.610 00:05:14.610 Elapsed time = 0.008 secondsEAL: Cannot find device (10000:00:01.0) 00:05:14.610 EAL: Failed to attach device on primary process 00:05:14.610 00:05:14.610 00:05:14.610 real 0m0.081s 00:05:14.610 user 0m0.031s 00:05:14.610 sys 0m0.050s 00:05:14.610 11:16:57 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.610 ************************************ 00:05:14.610 END TEST env_pci 00:05:14.610 ************************************ 00:05:14.610 11:16:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:14.610 11:16:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:14.610 11:16:57 env -- env/env.sh@15 -- # uname 00:05:14.610 11:16:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:14.610 11:16:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:14.610 11:16:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.611 11:16:57 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:14.611 11:16:57 env 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.611 11:16:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.611 ************************************ 00:05:14.611 START TEST env_dpdk_post_init 00:05:14.611 ************************************ 00:05:14.611 11:16:57 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.869 EAL: Detected CPU lcores: 10 00:05:14.869 EAL: Detected NUMA nodes: 1 00:05:14.869 EAL: Detected shared linkage of DPDK 00:05:14.869 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:14.869 EAL: Selected IOVA mode 'PA' 00:05:14.869 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:14.869 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:14.869 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:14.869 Starting DPDK initialization... 00:05:14.869 Starting SPDK post initialization... 00:05:14.869 SPDK NVMe probe 00:05:14.869 Attaching to 0000:00:10.0 00:05:14.869 Attaching to 0000:00:11.0 00:05:14.869 Attached to 0000:00:10.0 00:05:14.869 Attached to 0000:00:11.0 00:05:14.869 Cleaning up... 
00:05:15.127 00:05:15.127 real 0m0.288s 00:05:15.127 user 0m0.101s 00:05:15.127 sys 0m0.087s 00:05:15.127 11:16:57 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.127 11:16:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.127 ************************************ 00:05:15.127 END TEST env_dpdk_post_init 00:05:15.127 ************************************ 00:05:15.127 11:16:57 env -- env/env.sh@26 -- # uname 00:05:15.127 11:16:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:15.127 11:16:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.127 11:16:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.127 11:16:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.127 11:16:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.127 ************************************ 00:05:15.127 START TEST env_mem_callbacks 00:05:15.127 ************************************ 00:05:15.127 11:16:57 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.127 EAL: Detected CPU lcores: 10 00:05:15.127 EAL: Detected NUMA nodes: 1 00:05:15.127 EAL: Detected shared linkage of DPDK 00:05:15.127 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.127 EAL: Selected IOVA mode 'PA' 00:05:15.127 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.127 00:05:15.127 00:05:15.127 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.127 http://cunit.sourceforge.net/ 00:05:15.127 00:05:15.127 00:05:15.127 Suite: memory 00:05:15.127 Test: test ... 
00:05:15.127 register 0x200000200000 2097152 00:05:15.127 malloc 3145728 00:05:15.127 register 0x200000400000 4194304 00:05:15.127 buf 0x2000004fffc0 len 3145728 PASSED 00:05:15.127 malloc 64 00:05:15.127 buf 0x2000004ffec0 len 64 PASSED 00:05:15.127 malloc 4194304 00:05:15.127 register 0x200000800000 6291456 00:05:15.127 buf 0x2000009fffc0 len 4194304 PASSED 00:05:15.127 free 0x2000004fffc0 3145728 00:05:15.127 free 0x2000004ffec0 64 00:05:15.127 unregister 0x200000400000 4194304 PASSED 00:05:15.386 free 0x2000009fffc0 4194304 00:05:15.386 unregister 0x200000800000 6291456 PASSED 00:05:15.386 malloc 8388608 00:05:15.386 register 0x200000400000 10485760 00:05:15.386 buf 0x2000005fffc0 len 8388608 PASSED 00:05:15.386 free 0x2000005fffc0 8388608 00:05:15.386 unregister 0x200000400000 10485760 PASSED 00:05:15.386 passed 00:05:15.386 00:05:15.386 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.386 suites 1 1 n/a 0 0 00:05:15.386 tests 1 1 1 0 0 00:05:15.386 asserts 15 15 15 0 n/a 00:05:15.386 00:05:15.386 Elapsed time = 0.051 seconds 00:05:15.386 00:05:15.386 real 0m0.242s 00:05:15.386 user 0m0.083s 00:05:15.386 sys 0m0.057s 00:05:15.386 11:16:58 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.386 11:16:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:15.386 ************************************ 00:05:15.386 END TEST env_mem_callbacks 00:05:15.386 ************************************ 00:05:15.386 ************************************ 00:05:15.386 END TEST env 00:05:15.386 ************************************ 00:05:15.386 00:05:15.386 real 0m9.106s 00:05:15.386 user 0m7.000s 00:05:15.386 sys 0m1.708s 00:05:15.386 11:16:58 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.386 11:16:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.386 11:16:58 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:15.386 11:16:58 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.386 11:16:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.386 11:16:58 -- common/autotest_common.sh@10 -- # set +x 00:05:15.386 ************************************ 00:05:15.386 START TEST rpc 00:05:15.386 ************************************ 00:05:15.386 11:16:58 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:15.386 * Looking for test storage... 00:05:15.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:15.386 11:16:58 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.386 11:16:58 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.386 11:16:58 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.645 11:16:58 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.645 11:16:58 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.645 11:16:58 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.645 11:16:58 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.645 11:16:58 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.645 11:16:58 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.645 11:16:58 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.645 11:16:58 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.645 11:16:58 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.645 11:16:58 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.645 11:16:58 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.645 11:16:58 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.645 11:16:58 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:15.645 11:16:58 rpc -- scripts/common.sh@345 -- # : 1 00:05:15.645 11:16:58 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.645 11:16:58 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.645 11:16:58 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:15.645 11:16:58 rpc -- scripts/common.sh@353 -- # local d=1 00:05:15.645 11:16:58 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.645 11:16:58 rpc -- scripts/common.sh@355 -- # echo 1 00:05:15.645 11:16:58 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.645 11:16:58 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:15.645 11:16:58 rpc -- scripts/common.sh@353 -- # local d=2 00:05:15.645 11:16:58 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.645 11:16:58 rpc -- scripts/common.sh@355 -- # echo 2 00:05:15.645 11:16:58 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.645 11:16:58 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.645 11:16:58 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.645 11:16:58 rpc -- scripts/common.sh@368 -- # return 0 00:05:15.645 11:16:58 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.645 11:16:58 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.645 --rc genhtml_branch_coverage=1 00:05:15.645 --rc genhtml_function_coverage=1 00:05:15.645 --rc genhtml_legend=1 00:05:15.645 --rc geninfo_all_blocks=1 00:05:15.645 --rc geninfo_unexecuted_blocks=1 00:05:15.645 00:05:15.645 ' 00:05:15.645 11:16:58 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.645 --rc genhtml_branch_coverage=1 00:05:15.645 --rc genhtml_function_coverage=1 00:05:15.645 --rc genhtml_legend=1 00:05:15.645 --rc geninfo_all_blocks=1 00:05:15.645 --rc geninfo_unexecuted_blocks=1 00:05:15.645 00:05:15.645 ' 00:05:15.645 11:16:58 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:15.645 --rc genhtml_branch_coverage=1 00:05:15.645 --rc genhtml_function_coverage=1 00:05:15.645 --rc genhtml_legend=1 00:05:15.645 --rc geninfo_all_blocks=1 00:05:15.645 --rc geninfo_unexecuted_blocks=1 00:05:15.645 00:05:15.645 ' 00:05:15.645 11:16:58 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.645 --rc genhtml_branch_coverage=1 00:05:15.645 --rc genhtml_function_coverage=1 00:05:15.645 --rc genhtml_legend=1 00:05:15.645 --rc geninfo_all_blocks=1 00:05:15.645 --rc geninfo_unexecuted_blocks=1 00:05:15.645 00:05:15.645 ' 00:05:15.645 11:16:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56725 00:05:15.645 11:16:58 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:15.645 11:16:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.645 11:16:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56725 00:05:15.645 11:16:58 rpc -- common/autotest_common.sh@833 -- # '[' -z 56725 ']' 00:05:15.645 11:16:58 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.645 11:16:58 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:15.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.645 11:16:58 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.645 11:16:58 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:15.645 11:16:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.645 [2024-11-15 11:16:58.588637] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:05:15.645 [2024-11-15 11:16:58.588906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56725 ] 00:05:15.910 [2024-11-15 11:16:58.777662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.168 [2024-11-15 11:16:58.906115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:16.168 [2024-11-15 11:16:58.906239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56725' to capture a snapshot of events at runtime. 00:05:16.168 [2024-11-15 11:16:58.906258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.168 [2024-11-15 11:16:58.906274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.168 [2024-11-15 11:16:58.906285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56725 for offline analysis/debug. 
00:05:16.168 [2024-11-15 11:16:58.907773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.148 11:16:59 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.148 11:16:59 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:17.148 11:16:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:17.148 11:16:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:17.148 11:16:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:17.148 11:16:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:17.148 11:16:59 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.148 11:16:59 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.148 11:16:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.149 ************************************ 00:05:17.149 START TEST rpc_integrity 00:05:17.149 ************************************ 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:17.149 11:16:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.149 11:16:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.149 11:16:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:17.149 11:16:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.149 11:16:59 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.149 11:16:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:17.149 11:16:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.149 11:16:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.149 { 00:05:17.149 "name": "Malloc0", 00:05:17.149 "aliases": [ 00:05:17.149 "e417027d-c983-42a5-921d-1fea2854c69e" 00:05:17.149 ], 00:05:17.149 "product_name": "Malloc disk", 00:05:17.149 "block_size": 512, 00:05:17.149 "num_blocks": 16384, 00:05:17.149 "uuid": "e417027d-c983-42a5-921d-1fea2854c69e", 00:05:17.149 "assigned_rate_limits": { 00:05:17.149 "rw_ios_per_sec": 0, 00:05:17.149 "rw_mbytes_per_sec": 0, 00:05:17.149 "r_mbytes_per_sec": 0, 00:05:17.149 "w_mbytes_per_sec": 0 00:05:17.149 }, 00:05:17.149 "claimed": false, 00:05:17.149 "zoned": false, 00:05:17.149 "supported_io_types": { 00:05:17.149 "read": true, 00:05:17.149 "write": true, 00:05:17.149 "unmap": true, 00:05:17.149 "flush": true, 00:05:17.149 "reset": true, 00:05:17.149 "nvme_admin": false, 00:05:17.149 "nvme_io": false, 00:05:17.149 "nvme_io_md": false, 00:05:17.149 "write_zeroes": true, 00:05:17.149 "zcopy": true, 00:05:17.149 "get_zone_info": false, 00:05:17.149 "zone_management": false, 00:05:17.149 "zone_append": false, 00:05:17.149 "compare": false, 00:05:17.149 "compare_and_write": false, 00:05:17.149 "abort": true, 00:05:17.149 "seek_hole": false, 
00:05:17.149 "seek_data": false, 00:05:17.149 "copy": true, 00:05:17.149 "nvme_iov_md": false 00:05:17.149 }, 00:05:17.149 "memory_domains": [ 00:05:17.149 { 00:05:17.149 "dma_device_id": "system", 00:05:17.149 "dma_device_type": 1 00:05:17.149 }, 00:05:17.149 { 00:05:17.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.149 "dma_device_type": 2 00:05:17.149 } 00:05:17.149 ], 00:05:17.149 "driver_specific": {} 00:05:17.149 } 00:05:17.149 ]' 00:05:17.149 11:16:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.149 11:16:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.149 11:16:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.149 [2024-11-15 11:16:59.964402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:17.149 [2024-11-15 11:16:59.964506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.149 [2024-11-15 11:16:59.964584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:17.149 [2024-11-15 11:16:59.964623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.149 [2024-11-15 11:16:59.967983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.149 [2024-11-15 11:16:59.968063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.149 Passthru0 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.149 11:16:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:17.149 11:16:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.149 11:16:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.149 { 00:05:17.149 "name": "Malloc0", 00:05:17.149 "aliases": [ 00:05:17.149 "e417027d-c983-42a5-921d-1fea2854c69e" 00:05:17.149 ], 00:05:17.149 "product_name": "Malloc disk", 00:05:17.149 "block_size": 512, 00:05:17.149 "num_blocks": 16384, 00:05:17.149 "uuid": "e417027d-c983-42a5-921d-1fea2854c69e", 00:05:17.149 "assigned_rate_limits": { 00:05:17.149 "rw_ios_per_sec": 0, 00:05:17.149 "rw_mbytes_per_sec": 0, 00:05:17.149 "r_mbytes_per_sec": 0, 00:05:17.149 "w_mbytes_per_sec": 0 00:05:17.149 }, 00:05:17.149 "claimed": true, 00:05:17.149 "claim_type": "exclusive_write", 00:05:17.149 "zoned": false, 00:05:17.149 "supported_io_types": { 00:05:17.149 "read": true, 00:05:17.149 "write": true, 00:05:17.149 "unmap": true, 00:05:17.149 "flush": true, 00:05:17.149 "reset": true, 00:05:17.149 "nvme_admin": false, 00:05:17.149 "nvme_io": false, 00:05:17.149 "nvme_io_md": false, 00:05:17.149 "write_zeroes": true, 00:05:17.149 "zcopy": true, 00:05:17.149 "get_zone_info": false, 00:05:17.149 "zone_management": false, 00:05:17.149 "zone_append": false, 00:05:17.149 "compare": false, 00:05:17.149 "compare_and_write": false, 00:05:17.149 "abort": true, 00:05:17.149 "seek_hole": false, 00:05:17.149 "seek_data": false, 00:05:17.149 "copy": true, 00:05:17.149 "nvme_iov_md": false 00:05:17.149 }, 00:05:17.149 "memory_domains": [ 00:05:17.149 { 00:05:17.149 "dma_device_id": "system", 00:05:17.149 "dma_device_type": 1 00:05:17.149 }, 00:05:17.149 { 00:05:17.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.149 "dma_device_type": 2 00:05:17.149 } 00:05:17.149 ], 00:05:17.149 "driver_specific": {} 00:05:17.149 }, 00:05:17.149 { 00:05:17.149 "name": "Passthru0", 00:05:17.149 "aliases": [ 00:05:17.149 "40bacf43-c30c-5e2a-b94b-f7fefa72db17" 00:05:17.149 ], 00:05:17.149 "product_name": "passthru", 00:05:17.149 
"block_size": 512, 00:05:17.149 "num_blocks": 16384, 00:05:17.149 "uuid": "40bacf43-c30c-5e2a-b94b-f7fefa72db17", 00:05:17.149 "assigned_rate_limits": { 00:05:17.149 "rw_ios_per_sec": 0, 00:05:17.149 "rw_mbytes_per_sec": 0, 00:05:17.149 "r_mbytes_per_sec": 0, 00:05:17.149 "w_mbytes_per_sec": 0 00:05:17.149 }, 00:05:17.149 "claimed": false, 00:05:17.149 "zoned": false, 00:05:17.149 "supported_io_types": { 00:05:17.149 "read": true, 00:05:17.149 "write": true, 00:05:17.149 "unmap": true, 00:05:17.149 "flush": true, 00:05:17.149 "reset": true, 00:05:17.149 "nvme_admin": false, 00:05:17.149 "nvme_io": false, 00:05:17.149 "nvme_io_md": false, 00:05:17.149 "write_zeroes": true, 00:05:17.149 "zcopy": true, 00:05:17.150 "get_zone_info": false, 00:05:17.150 "zone_management": false, 00:05:17.150 "zone_append": false, 00:05:17.150 "compare": false, 00:05:17.150 "compare_and_write": false, 00:05:17.150 "abort": true, 00:05:17.150 "seek_hole": false, 00:05:17.150 "seek_data": false, 00:05:17.150 "copy": true, 00:05:17.150 "nvme_iov_md": false 00:05:17.150 }, 00:05:17.150 "memory_domains": [ 00:05:17.150 { 00:05:17.150 "dma_device_id": "system", 00:05:17.150 "dma_device_type": 1 00:05:17.150 }, 00:05:17.150 { 00:05:17.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.150 "dma_device_type": 2 00:05:17.150 } 00:05:17.150 ], 00:05:17.150 "driver_specific": { 00:05:17.150 "passthru": { 00:05:17.150 "name": "Passthru0", 00:05:17.150 "base_bdev_name": "Malloc0" 00:05:17.150 } 00:05:17.150 } 00:05:17.150 } 00:05:17.150 ]' 00:05:17.150 11:17:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.150 11:17:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.150 11:17:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.150 11:17:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.150 11:17:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.150 11:17:00 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.150 11:17:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:17.150 11:17:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.150 11:17:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.150 11:17:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.150 11:17:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.150 11:17:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.150 11:17:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.409 11:17:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.409 11:17:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.409 11:17:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.409 11:17:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.409 00:05:17.409 real 0m0.339s 00:05:17.409 user 0m0.207s 00:05:17.409 sys 0m0.041s 00:05:17.409 11:17:00 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.409 11:17:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.409 ************************************ 00:05:17.409 END TEST rpc_integrity 00:05:17.409 ************************************ 00:05:17.409 11:17:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:17.409 11:17:00 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.409 11:17:00 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.409 11:17:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.409 ************************************ 00:05:17.409 START TEST rpc_plugins 00:05:17.409 ************************************ 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:17.409 11:17:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.409 11:17:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:17.409 11:17:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.409 11:17:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:17.409 { 00:05:17.409 "name": "Malloc1", 00:05:17.409 "aliases": [ 00:05:17.409 "d839f695-1770-4b6d-8b60-dc632c9a6144" 00:05:17.409 ], 00:05:17.409 "product_name": "Malloc disk", 00:05:17.409 "block_size": 4096, 00:05:17.409 "num_blocks": 256, 00:05:17.409 "uuid": "d839f695-1770-4b6d-8b60-dc632c9a6144", 00:05:17.409 "assigned_rate_limits": { 00:05:17.409 "rw_ios_per_sec": 0, 00:05:17.409 "rw_mbytes_per_sec": 0, 00:05:17.409 "r_mbytes_per_sec": 0, 00:05:17.409 "w_mbytes_per_sec": 0 00:05:17.409 }, 00:05:17.409 "claimed": false, 00:05:17.409 "zoned": false, 00:05:17.409 "supported_io_types": { 00:05:17.409 "read": true, 00:05:17.409 "write": true, 00:05:17.409 "unmap": true, 00:05:17.409 "flush": true, 00:05:17.409 "reset": true, 00:05:17.409 "nvme_admin": false, 00:05:17.409 "nvme_io": false, 00:05:17.409 "nvme_io_md": false, 00:05:17.409 "write_zeroes": true, 00:05:17.409 "zcopy": true, 00:05:17.409 "get_zone_info": false, 00:05:17.409 "zone_management": false, 00:05:17.409 "zone_append": false, 00:05:17.409 "compare": false, 00:05:17.409 "compare_and_write": false, 00:05:17.409 "abort": true, 00:05:17.409 "seek_hole": false, 00:05:17.409 "seek_data": false, 00:05:17.409 "copy": 
true, 00:05:17.409 "nvme_iov_md": false 00:05:17.409 }, 00:05:17.409 "memory_domains": [ 00:05:17.409 { 00:05:17.409 "dma_device_id": "system", 00:05:17.409 "dma_device_type": 1 00:05:17.409 }, 00:05:17.409 { 00:05:17.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.409 "dma_device_type": 2 00:05:17.409 } 00:05:17.409 ], 00:05:17.409 "driver_specific": {} 00:05:17.409 } 00:05:17.409 ]' 00:05:17.409 11:17:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:17.409 11:17:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:17.409 11:17:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.409 11:17:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.409 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.409 11:17:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:17.409 11:17:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:17.668 11:17:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:17.668 00:05:17.668 real 0m0.171s 00:05:17.668 user 0m0.108s 00:05:17.668 sys 0m0.019s 00:05:17.668 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.668 11:17:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.668 ************************************ 00:05:17.668 END TEST rpc_plugins 00:05:17.668 ************************************ 00:05:17.668 11:17:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:17.668 11:17:00 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.668 11:17:00 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.668 11:17:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.668 ************************************ 00:05:17.668 START TEST rpc_trace_cmd_test 00:05:17.668 ************************************ 00:05:17.668 11:17:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:17.668 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:17.668 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:17.668 11:17:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.668 11:17:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.668 11:17:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.668 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:17.668 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56725", 00:05:17.668 "tpoint_group_mask": "0x8", 00:05:17.668 "iscsi_conn": { 00:05:17.668 "mask": "0x2", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "scsi": { 00:05:17.668 "mask": "0x4", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "bdev": { 00:05:17.668 "mask": "0x8", 00:05:17.668 "tpoint_mask": "0xffffffffffffffff" 00:05:17.668 }, 00:05:17.668 "nvmf_rdma": { 00:05:17.668 "mask": "0x10", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "nvmf_tcp": { 00:05:17.668 "mask": "0x20", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "ftl": { 00:05:17.668 "mask": "0x40", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "blobfs": { 00:05:17.668 "mask": "0x80", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "dsa": { 00:05:17.668 "mask": "0x200", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "thread": { 00:05:17.668 "mask": "0x400", 00:05:17.668 
"tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "nvme_pcie": { 00:05:17.668 "mask": "0x800", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "iaa": { 00:05:17.668 "mask": "0x1000", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "nvme_tcp": { 00:05:17.668 "mask": "0x2000", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "bdev_nvme": { 00:05:17.668 "mask": "0x4000", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "sock": { 00:05:17.668 "mask": "0x8000", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "blob": { 00:05:17.668 "mask": "0x10000", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "bdev_raid": { 00:05:17.668 "mask": "0x20000", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 }, 00:05:17.668 "scheduler": { 00:05:17.668 "mask": "0x40000", 00:05:17.668 "tpoint_mask": "0x0" 00:05:17.668 } 00:05:17.668 }' 00:05:17.668 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:17.668 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:17.668 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:17.668 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:17.668 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:17.927 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:17.927 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:17.927 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:17.927 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:17.927 11:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:17.927 00:05:17.927 real 0m0.288s 00:05:17.927 user 0m0.246s 00:05:17.927 sys 0m0.031s 00:05:17.927 11:17:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:05:17.927 ************************************ 00:05:17.927 11:17:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.927 END TEST rpc_trace_cmd_test 00:05:17.927 ************************************ 00:05:17.927 11:17:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:17.927 11:17:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:17.927 11:17:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:17.927 11:17:00 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.927 11:17:00 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.927 11:17:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.927 ************************************ 00:05:17.927 START TEST rpc_daemon_integrity 00:05:17.927 ************************************ 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.927 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.186 { 00:05:18.186 "name": "Malloc2", 00:05:18.186 "aliases": [ 00:05:18.186 "cb0b7f07-350f-4601-83fd-35dc17596d31" 00:05:18.186 ], 00:05:18.186 "product_name": "Malloc disk", 00:05:18.186 "block_size": 512, 00:05:18.186 "num_blocks": 16384, 00:05:18.186 "uuid": "cb0b7f07-350f-4601-83fd-35dc17596d31", 00:05:18.186 "assigned_rate_limits": { 00:05:18.186 "rw_ios_per_sec": 0, 00:05:18.186 "rw_mbytes_per_sec": 0, 00:05:18.186 "r_mbytes_per_sec": 0, 00:05:18.186 "w_mbytes_per_sec": 0 00:05:18.186 }, 00:05:18.186 "claimed": false, 00:05:18.186 "zoned": false, 00:05:18.186 "supported_io_types": { 00:05:18.186 "read": true, 00:05:18.186 "write": true, 00:05:18.186 "unmap": true, 00:05:18.186 "flush": true, 00:05:18.186 "reset": true, 00:05:18.186 "nvme_admin": false, 00:05:18.186 "nvme_io": false, 00:05:18.186 "nvme_io_md": false, 00:05:18.186 "write_zeroes": true, 00:05:18.186 "zcopy": true, 00:05:18.186 "get_zone_info": false, 00:05:18.186 "zone_management": false, 00:05:18.186 "zone_append": false, 00:05:18.186 "compare": false, 00:05:18.186 "compare_and_write": false, 00:05:18.186 "abort": true, 00:05:18.186 "seek_hole": false, 00:05:18.186 "seek_data": false, 00:05:18.186 "copy": true, 00:05:18.186 "nvme_iov_md": false 00:05:18.186 }, 00:05:18.186 "memory_domains": [ 00:05:18.186 { 00:05:18.186 "dma_device_id": "system", 00:05:18.186 "dma_device_type": 1 00:05:18.186 }, 00:05:18.186 { 00:05:18.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.186 "dma_device_type": 2 00:05:18.186 } 
00:05:18.186 ], 00:05:18.186 "driver_specific": {} 00:05:18.186 } 00:05:18.186 ]' 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.186 [2024-11-15 11:17:00.953812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:18.186 [2024-11-15 11:17:00.953912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.186 [2024-11-15 11:17:00.953942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:18.186 [2024-11-15 11:17:00.953960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.186 [2024-11-15 11:17:00.957112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.186 [2024-11-15 11:17:00.957203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.186 Passthru0 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.186 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.186 { 00:05:18.186 "name": "Malloc2", 00:05:18.186 "aliases": [ 00:05:18.186 "cb0b7f07-350f-4601-83fd-35dc17596d31" 
00:05:18.186 ], 00:05:18.186 "product_name": "Malloc disk", 00:05:18.186 "block_size": 512, 00:05:18.186 "num_blocks": 16384, 00:05:18.186 "uuid": "cb0b7f07-350f-4601-83fd-35dc17596d31", 00:05:18.187 "assigned_rate_limits": { 00:05:18.187 "rw_ios_per_sec": 0, 00:05:18.187 "rw_mbytes_per_sec": 0, 00:05:18.187 "r_mbytes_per_sec": 0, 00:05:18.187 "w_mbytes_per_sec": 0 00:05:18.187 }, 00:05:18.187 "claimed": true, 00:05:18.187 "claim_type": "exclusive_write", 00:05:18.187 "zoned": false, 00:05:18.187 "supported_io_types": { 00:05:18.187 "read": true, 00:05:18.187 "write": true, 00:05:18.187 "unmap": true, 00:05:18.187 "flush": true, 00:05:18.187 "reset": true, 00:05:18.187 "nvme_admin": false, 00:05:18.187 "nvme_io": false, 00:05:18.187 "nvme_io_md": false, 00:05:18.187 "write_zeroes": true, 00:05:18.187 "zcopy": true, 00:05:18.187 "get_zone_info": false, 00:05:18.187 "zone_management": false, 00:05:18.187 "zone_append": false, 00:05:18.187 "compare": false, 00:05:18.187 "compare_and_write": false, 00:05:18.187 "abort": true, 00:05:18.187 "seek_hole": false, 00:05:18.187 "seek_data": false, 00:05:18.187 "copy": true, 00:05:18.187 "nvme_iov_md": false 00:05:18.187 }, 00:05:18.187 "memory_domains": [ 00:05:18.187 { 00:05:18.187 "dma_device_id": "system", 00:05:18.187 "dma_device_type": 1 00:05:18.187 }, 00:05:18.187 { 00:05:18.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.187 "dma_device_type": 2 00:05:18.187 } 00:05:18.187 ], 00:05:18.187 "driver_specific": {} 00:05:18.187 }, 00:05:18.187 { 00:05:18.187 "name": "Passthru0", 00:05:18.187 "aliases": [ 00:05:18.187 "6bf3230f-abab-5d28-a516-b59e852b3bdc" 00:05:18.187 ], 00:05:18.187 "product_name": "passthru", 00:05:18.187 "block_size": 512, 00:05:18.187 "num_blocks": 16384, 00:05:18.187 "uuid": "6bf3230f-abab-5d28-a516-b59e852b3bdc", 00:05:18.187 "assigned_rate_limits": { 00:05:18.187 "rw_ios_per_sec": 0, 00:05:18.187 "rw_mbytes_per_sec": 0, 00:05:18.187 "r_mbytes_per_sec": 0, 00:05:18.187 "w_mbytes_per_sec": 0 
00:05:18.187 }, 00:05:18.187 "claimed": false, 00:05:18.187 "zoned": false, 00:05:18.187 "supported_io_types": { 00:05:18.187 "read": true, 00:05:18.187 "write": true, 00:05:18.187 "unmap": true, 00:05:18.187 "flush": true, 00:05:18.187 "reset": true, 00:05:18.187 "nvme_admin": false, 00:05:18.187 "nvme_io": false, 00:05:18.187 "nvme_io_md": false, 00:05:18.187 "write_zeroes": true, 00:05:18.187 "zcopy": true, 00:05:18.187 "get_zone_info": false, 00:05:18.187 "zone_management": false, 00:05:18.187 "zone_append": false, 00:05:18.187 "compare": false, 00:05:18.187 "compare_and_write": false, 00:05:18.187 "abort": true, 00:05:18.187 "seek_hole": false, 00:05:18.187 "seek_data": false, 00:05:18.187 "copy": true, 00:05:18.187 "nvme_iov_md": false 00:05:18.187 }, 00:05:18.187 "memory_domains": [ 00:05:18.187 { 00:05:18.187 "dma_device_id": "system", 00:05:18.187 "dma_device_type": 1 00:05:18.187 }, 00:05:18.187 { 00:05:18.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.187 "dma_device_type": 2 00:05:18.187 } 00:05:18.187 ], 00:05:18.187 "driver_specific": { 00:05:18.187 "passthru": { 00:05:18.187 "name": "Passthru0", 00:05:18.187 "base_bdev_name": "Malloc2" 00:05:18.187 } 00:05:18.187 } 00:05:18.187 } 00:05:18.187 ]' 00:05:18.187 11:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.187 11:17:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:18.445 11:17:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.445 00:05:18.445 real 0m0.362s 00:05:18.445 user 0m0.219s 00:05:18.445 sys 0m0.046s 00:05:18.445 11:17:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.445 ************************************ 00:05:18.445 11:17:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.445 END TEST rpc_daemon_integrity 00:05:18.445 ************************************ 00:05:18.445 11:17:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:18.445 11:17:01 rpc -- rpc/rpc.sh@84 -- # killprocess 56725 00:05:18.445 11:17:01 rpc -- common/autotest_common.sh@952 -- # '[' -z 56725 ']' 00:05:18.445 11:17:01 rpc -- common/autotest_common.sh@956 -- # kill -0 56725 00:05:18.445 11:17:01 rpc -- common/autotest_common.sh@957 -- # uname 00:05:18.445 11:17:01 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:18.445 11:17:01 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56725 00:05:18.445 11:17:01 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:18.445 11:17:01 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:18.445 
killing process with pid 56725 00:05:18.445 11:17:01 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56725' 00:05:18.445 11:17:01 rpc -- common/autotest_common.sh@971 -- # kill 56725 00:05:18.445 11:17:01 rpc -- common/autotest_common.sh@976 -- # wait 56725 00:05:20.980 00:05:20.980 real 0m5.132s 00:05:20.980 user 0m5.702s 00:05:20.980 sys 0m1.053s 00:05:20.980 11:17:03 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.980 11:17:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.980 ************************************ 00:05:20.980 END TEST rpc 00:05:20.980 ************************************ 00:05:20.980 11:17:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:20.980 11:17:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:20.980 11:17:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.980 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:05:20.980 ************************************ 00:05:20.980 START TEST skip_rpc 00:05:20.980 ************************************ 00:05:20.980 11:17:03 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:20.980 * Looking for test storage... 
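The killprocess sequence above (kill -0, uname, ps -o comm=, the "reactor_0 = sudo" guard) follows a common safety pattern before killing a daemon by pid. A condensed sketch of those checks, using this shell's own pid so nothing is actually killed; the helper name and structure here are illustrative, not the autotest_common.sh implementation:

```shell
# Illustrative sketch: verify the pid is alive (kill -0), look up its
# command name, and refuse to proceed if it resolves to "sudo" -- the
# same guards visible in the killprocess trace above.
pid=$$
name=""
if kill -0 "$pid" 2>/dev/null; then
  name=$(ps --no-headers -o comm= -p "$pid")
  if [ "$name" != "sudo" ]; then
    echo "would kill pid $pid ($name)"
  fi
fi
```

The real helper then sends the signal and `wait`s on the pid, which is why the log shows `kill 56725` followed by `wait 56725`.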
00:05:20.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:20.980 11:17:03 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:20.980 11:17:03 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:20.980 11:17:03 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:20.980 11:17:03 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.980 11:17:03 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:20.980 11:17:03 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.980 11:17:03 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:20.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.980 --rc genhtml_branch_coverage=1 00:05:20.980 --rc genhtml_function_coverage=1 00:05:20.980 --rc genhtml_legend=1 00:05:20.980 --rc geninfo_all_blocks=1 00:05:20.980 --rc geninfo_unexecuted_blocks=1 00:05:20.980 00:05:20.980 ' 00:05:20.980 11:17:03 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:20.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.980 --rc genhtml_branch_coverage=1 00:05:20.980 --rc genhtml_function_coverage=1 00:05:20.980 --rc genhtml_legend=1 00:05:20.980 --rc geninfo_all_blocks=1 00:05:20.981 --rc geninfo_unexecuted_blocks=1 00:05:20.981 00:05:20.981 ' 00:05:20.981 11:17:03 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:20.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.981 --rc genhtml_branch_coverage=1 00:05:20.981 --rc genhtml_function_coverage=1 00:05:20.981 --rc genhtml_legend=1 00:05:20.981 --rc geninfo_all_blocks=1 00:05:20.981 --rc geninfo_unexecuted_blocks=1 00:05:20.981 00:05:20.981 ' 00:05:20.981 11:17:03 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:20.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.981 --rc genhtml_branch_coverage=1 00:05:20.981 --rc genhtml_function_coverage=1 00:05:20.981 --rc genhtml_legend=1 00:05:20.981 --rc geninfo_all_blocks=1 00:05:20.981 --rc geninfo_unexecuted_blocks=1 00:05:20.981 00:05:20.981 ' 00:05:20.981 11:17:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:20.981 11:17:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:20.981 11:17:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:20.981 11:17:03 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:20.981 11:17:03 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.981 11:17:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.981 ************************************ 00:05:20.981 START TEST skip_rpc 00:05:20.981 ************************************ 00:05:20.981 11:17:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:20.981 11:17:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56954 00:05:20.981 11:17:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.981 11:17:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:20.981 11:17:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:20.981 [2024-11-15 11:17:03.752043] Starting SPDK v25.01-pre 
git sha1 514198259 / DPDK 24.03.0 initialization... 00:05:20.981 [2024-11-15 11:17:03.752268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56954 ] 00:05:21.240 [2024-11-15 11:17:03.941306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.240 [2024-11-15 11:17:04.089122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56954 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56954 ']' 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56954 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56954 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56954' 00:05:26.507 killing process with pid 56954 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56954 00:05:26.507 11:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56954 00:05:28.410 00:05:28.410 real 0m7.257s 00:05:28.410 user 0m6.556s 00:05:28.410 sys 0m0.589s 00:05:28.410 11:17:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.410 11:17:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.410 ************************************ 00:05:28.410 END TEST skip_rpc 00:05:28.410 ************************************ 00:05:28.410 11:17:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:28.410 11:17:10 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.410 11:17:10 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.410 11:17:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.410 
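The skip_rpc test above starts the target with --no-rpc-server and then asserts that rpc_cmd *fails* (the NOT/valid_exec_arg trace, ending with es=1). The core of that pattern is a wrapper that inverts a command's exit status; a minimal sketch under the assumption that only the inversion matters, with the SPDK-specific argument validation omitted:

```shell
# Illustrative sketch of a negative-test wrapper: succeed only when the
# wrapped command fails, mirroring how the test expects rpc_cmd to fail
# while the RPC server is disabled.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded
  else
    return 0   # expected failure
  fi
}
NOT false && echo "negative test passed"
```

With this shape, `NOT rpc_cmd spdk_get_version` passing is what lets the test conclude the RPC server really was skipped.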
************************************ 00:05:28.410 START TEST skip_rpc_with_json 00:05:28.410 ************************************ 00:05:28.410 11:17:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:28.410 11:17:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:28.410 11:17:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57058 00:05:28.410 11:17:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.410 11:17:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.410 11:17:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57058 00:05:28.410 11:17:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57058 ']' 00:05:28.410 11:17:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.410 11:17:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.410 11:17:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.410 11:17:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.410 11:17:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.410 [2024-11-15 11:17:11.055800] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:05:28.410 [2024-11-15 11:17:11.056015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57058 ] 00:05:28.410 [2024-11-15 11:17:11.231016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.668 [2024-11-15 11:17:11.359413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.604 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.604 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:29.604 11:17:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:29.604 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.604 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.604 [2024-11-15 11:17:12.269460] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:29.604 request: 00:05:29.604 { 00:05:29.604 "trtype": "tcp", 00:05:29.604 "method": "nvmf_get_transports", 00:05:29.605 "req_id": 1 00:05:29.605 } 00:05:29.605 Got JSON-RPC error response 00:05:29.605 response: 00:05:29.605 { 00:05:29.605 "code": -19, 00:05:29.605 "message": "No such device" 00:05:29.605 } 00:05:29.605 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:29.605 11:17:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:29.605 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.605 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.605 [2024-11-15 11:17:12.277629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
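The exchange above shows the intended failure/success pair: nvmf_get_transports for an unconfigured trtype returns a JSON-RPC error (code -19, "No such device"), and after nvmf_create_transport -t tcp the same query succeeds. A sketch of that control flow with the RPC stubbed out by a variable, so the error-then-success sequencing is visible without a running target; the stub function and its JSON strings are illustrative, not SPDK's:

```shell
# Illustrative stub: models the rc contract seen above -- nonzero exit
# plus an error body before the transport exists, a JSON result after.
transport_created=0
nvmf_get_transports() {
  if [ "$transport_created" -eq 1 ]; then
    echo '[{"trtype": "TCP"}]'
  else
    echo '{"code": -19, "message": "No such device"}' >&2
    return 1
  fi
}
nvmf_get_transports >/dev/null 2>&1 && echo unexpected || echo "error before create"
transport_created=1
nvmf_get_transports >/dev/null && echo "ok after create"
```

This mirrors why the test wraps the first call in a `[[ 1 == 0 ]]`-style status check and only then issues the create.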
00:05:29.605 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.605 11:17:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:29.605 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.605 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.605 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.605 11:17:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:29.605 { 00:05:29.605 "subsystems": [ 00:05:29.605 { 00:05:29.605 "subsystem": "fsdev", 00:05:29.605 "config": [ 00:05:29.605 { 00:05:29.605 "method": "fsdev_set_opts", 00:05:29.605 "params": { 00:05:29.605 "fsdev_io_pool_size": 65535, 00:05:29.605 "fsdev_io_cache_size": 256 00:05:29.605 } 00:05:29.605 } 00:05:29.605 ] 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "subsystem": "keyring", 00:05:29.605 "config": [] 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "subsystem": "iobuf", 00:05:29.605 "config": [ 00:05:29.605 { 00:05:29.605 "method": "iobuf_set_options", 00:05:29.605 "params": { 00:05:29.605 "small_pool_count": 8192, 00:05:29.605 "large_pool_count": 1024, 00:05:29.605 "small_bufsize": 8192, 00:05:29.605 "large_bufsize": 135168, 00:05:29.605 "enable_numa": false 00:05:29.605 } 00:05:29.605 } 00:05:29.605 ] 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "subsystem": "sock", 00:05:29.605 "config": [ 00:05:29.605 { 00:05:29.605 "method": "sock_set_default_impl", 00:05:29.605 "params": { 00:05:29.605 "impl_name": "posix" 00:05:29.605 } 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "method": "sock_impl_set_options", 00:05:29.605 "params": { 00:05:29.605 "impl_name": "ssl", 00:05:29.605 "recv_buf_size": 4096, 00:05:29.605 "send_buf_size": 4096, 00:05:29.605 "enable_recv_pipe": true, 00:05:29.605 "enable_quickack": false, 00:05:29.605 
"enable_placement_id": 0, 00:05:29.605 "enable_zerocopy_send_server": true, 00:05:29.605 "enable_zerocopy_send_client": false, 00:05:29.605 "zerocopy_threshold": 0, 00:05:29.605 "tls_version": 0, 00:05:29.605 "enable_ktls": false 00:05:29.605 } 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "method": "sock_impl_set_options", 00:05:29.605 "params": { 00:05:29.605 "impl_name": "posix", 00:05:29.605 "recv_buf_size": 2097152, 00:05:29.605 "send_buf_size": 2097152, 00:05:29.605 "enable_recv_pipe": true, 00:05:29.605 "enable_quickack": false, 00:05:29.605 "enable_placement_id": 0, 00:05:29.605 "enable_zerocopy_send_server": true, 00:05:29.605 "enable_zerocopy_send_client": false, 00:05:29.605 "zerocopy_threshold": 0, 00:05:29.605 "tls_version": 0, 00:05:29.605 "enable_ktls": false 00:05:29.605 } 00:05:29.605 } 00:05:29.605 ] 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "subsystem": "vmd", 00:05:29.605 "config": [] 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "subsystem": "accel", 00:05:29.605 "config": [ 00:05:29.605 { 00:05:29.605 "method": "accel_set_options", 00:05:29.605 "params": { 00:05:29.605 "small_cache_size": 128, 00:05:29.605 "large_cache_size": 16, 00:05:29.605 "task_count": 2048, 00:05:29.605 "sequence_count": 2048, 00:05:29.605 "buf_count": 2048 00:05:29.605 } 00:05:29.605 } 00:05:29.605 ] 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "subsystem": "bdev", 00:05:29.605 "config": [ 00:05:29.605 { 00:05:29.605 "method": "bdev_set_options", 00:05:29.605 "params": { 00:05:29.605 "bdev_io_pool_size": 65535, 00:05:29.605 "bdev_io_cache_size": 256, 00:05:29.605 "bdev_auto_examine": true, 00:05:29.605 "iobuf_small_cache_size": 128, 00:05:29.605 "iobuf_large_cache_size": 16 00:05:29.605 } 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "method": "bdev_raid_set_options", 00:05:29.605 "params": { 00:05:29.605 "process_window_size_kb": 1024, 00:05:29.605 "process_max_bandwidth_mb_sec": 0 00:05:29.605 } 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "method": "bdev_iscsi_set_options", 
00:05:29.605 "params": { 00:05:29.605 "timeout_sec": 30 00:05:29.605 } 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "method": "bdev_nvme_set_options", 00:05:29.605 "params": { 00:05:29.605 "action_on_timeout": "none", 00:05:29.605 "timeout_us": 0, 00:05:29.605 "timeout_admin_us": 0, 00:05:29.605 "keep_alive_timeout_ms": 10000, 00:05:29.605 "arbitration_burst": 0, 00:05:29.605 "low_priority_weight": 0, 00:05:29.605 "medium_priority_weight": 0, 00:05:29.605 "high_priority_weight": 0, 00:05:29.605 "nvme_adminq_poll_period_us": 10000, 00:05:29.605 "nvme_ioq_poll_period_us": 0, 00:05:29.605 "io_queue_requests": 0, 00:05:29.605 "delay_cmd_submit": true, 00:05:29.605 "transport_retry_count": 4, 00:05:29.605 "bdev_retry_count": 3, 00:05:29.605 "transport_ack_timeout": 0, 00:05:29.605 "ctrlr_loss_timeout_sec": 0, 00:05:29.605 "reconnect_delay_sec": 0, 00:05:29.605 "fast_io_fail_timeout_sec": 0, 00:05:29.605 "disable_auto_failback": false, 00:05:29.605 "generate_uuids": false, 00:05:29.605 "transport_tos": 0, 00:05:29.605 "nvme_error_stat": false, 00:05:29.605 "rdma_srq_size": 0, 00:05:29.605 "io_path_stat": false, 00:05:29.605 "allow_accel_sequence": false, 00:05:29.605 "rdma_max_cq_size": 0, 00:05:29.605 "rdma_cm_event_timeout_ms": 0, 00:05:29.605 "dhchap_digests": [ 00:05:29.605 "sha256", 00:05:29.605 "sha384", 00:05:29.605 "sha512" 00:05:29.605 ], 00:05:29.605 "dhchap_dhgroups": [ 00:05:29.605 "null", 00:05:29.605 "ffdhe2048", 00:05:29.605 "ffdhe3072", 00:05:29.605 "ffdhe4096", 00:05:29.605 "ffdhe6144", 00:05:29.605 "ffdhe8192" 00:05:29.605 ] 00:05:29.605 } 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "method": "bdev_nvme_set_hotplug", 00:05:29.605 "params": { 00:05:29.605 "period_us": 100000, 00:05:29.605 "enable": false 00:05:29.605 } 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "method": "bdev_wait_for_examine" 00:05:29.605 } 00:05:29.605 ] 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "subsystem": "scsi", 00:05:29.605 "config": null 00:05:29.605 }, 00:05:29.605 { 
00:05:29.605 "subsystem": "scheduler", 00:05:29.605 "config": [ 00:05:29.605 { 00:05:29.605 "method": "framework_set_scheduler", 00:05:29.605 "params": { 00:05:29.605 "name": "static" 00:05:29.605 } 00:05:29.605 } 00:05:29.605 ] 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "subsystem": "vhost_scsi", 00:05:29.605 "config": [] 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "subsystem": "vhost_blk", 00:05:29.605 "config": [] 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "subsystem": "ublk", 00:05:29.605 "config": [] 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "subsystem": "nbd", 00:05:29.605 "config": [] 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "subsystem": "nvmf", 00:05:29.605 "config": [ 00:05:29.605 { 00:05:29.605 "method": "nvmf_set_config", 00:05:29.605 "params": { 00:05:29.605 "discovery_filter": "match_any", 00:05:29.605 "admin_cmd_passthru": { 00:05:29.605 "identify_ctrlr": false 00:05:29.605 }, 00:05:29.605 "dhchap_digests": [ 00:05:29.605 "sha256", 00:05:29.605 "sha384", 00:05:29.605 "sha512" 00:05:29.605 ], 00:05:29.605 "dhchap_dhgroups": [ 00:05:29.605 "null", 00:05:29.605 "ffdhe2048", 00:05:29.605 "ffdhe3072", 00:05:29.605 "ffdhe4096", 00:05:29.605 "ffdhe6144", 00:05:29.605 "ffdhe8192" 00:05:29.605 ] 00:05:29.605 } 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "method": "nvmf_set_max_subsystems", 00:05:29.605 "params": { 00:05:29.605 "max_subsystems": 1024 00:05:29.605 } 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "method": "nvmf_set_crdt", 00:05:29.605 "params": { 00:05:29.605 "crdt1": 0, 00:05:29.605 "crdt2": 0, 00:05:29.605 "crdt3": 0 00:05:29.605 } 00:05:29.605 }, 00:05:29.605 { 00:05:29.605 "method": "nvmf_create_transport", 00:05:29.605 "params": { 00:05:29.605 "trtype": "TCP", 00:05:29.605 "max_queue_depth": 128, 00:05:29.605 "max_io_qpairs_per_ctrlr": 127, 00:05:29.605 "in_capsule_data_size": 4096, 00:05:29.605 "max_io_size": 131072, 00:05:29.605 "io_unit_size": 131072, 00:05:29.605 "max_aq_depth": 128, 00:05:29.605 "num_shared_buffers": 511, 
00:05:29.605 "buf_cache_size": 4294967295, 00:05:29.605 "dif_insert_or_strip": false, 00:05:29.605 "zcopy": false, 00:05:29.606 "c2h_success": true, 00:05:29.606 "sock_priority": 0, 00:05:29.606 "abort_timeout_sec": 1, 00:05:29.606 "ack_timeout": 0, 00:05:29.606 "data_wr_pool_size": 0 00:05:29.606 } 00:05:29.606 } 00:05:29.606 ] 00:05:29.606 }, 00:05:29.606 { 00:05:29.606 "subsystem": "iscsi", 00:05:29.606 "config": [ 00:05:29.606 { 00:05:29.606 "method": "iscsi_set_options", 00:05:29.606 "params": { 00:05:29.606 "node_base": "iqn.2016-06.io.spdk", 00:05:29.606 "max_sessions": 128, 00:05:29.606 "max_connections_per_session": 2, 00:05:29.606 "max_queue_depth": 64, 00:05:29.606 "default_time2wait": 2, 00:05:29.606 "default_time2retain": 20, 00:05:29.606 "first_burst_length": 8192, 00:05:29.606 "immediate_data": true, 00:05:29.606 "allow_duplicated_isid": false, 00:05:29.606 "error_recovery_level": 0, 00:05:29.606 "nop_timeout": 60, 00:05:29.606 "nop_in_interval": 30, 00:05:29.606 "disable_chap": false, 00:05:29.606 "require_chap": false, 00:05:29.606 "mutual_chap": false, 00:05:29.606 "chap_group": 0, 00:05:29.606 "max_large_datain_per_connection": 64, 00:05:29.606 "max_r2t_per_connection": 4, 00:05:29.606 "pdu_pool_size": 36864, 00:05:29.606 "immediate_data_pool_size": 16384, 00:05:29.606 "data_out_pool_size": 2048 00:05:29.606 } 00:05:29.606 } 00:05:29.606 ] 00:05:29.606 } 00:05:29.606 ] 00:05:29.606 } 00:05:29.606 11:17:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:29.606 11:17:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57058 00:05:29.606 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57058 ']' 00:05:29.606 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57058 00:05:29.606 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:29.606 11:17:12 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:29.606 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57058 00:05:29.606 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.606 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.606 killing process with pid 57058 00:05:29.606 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57058' 00:05:29.606 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57058 00:05:29.606 11:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57058 00:05:32.138 11:17:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57114 00:05:32.138 11:17:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:32.138 11:17:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:37.420 11:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57114 00:05:37.420 11:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57114 ']' 00:05:37.420 11:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57114 00:05:37.420 11:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:37.420 11:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:37.420 11:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57114 00:05:37.420 11:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:37.420 11:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
sudo ']' 00:05:37.420 killing process with pid 57114 00:05:37.420 11:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57114' 00:05:37.420 11:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57114 00:05:37.420 11:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57114 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.321 00:05:39.321 real 0m11.103s 00:05:39.321 user 0m10.326s 00:05:39.321 sys 0m1.185s 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.321 ************************************ 00:05:39.321 END TEST skip_rpc_with_json 00:05:39.321 ************************************ 00:05:39.321 11:17:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:39.321 11:17:22 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.321 11:17:22 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.321 11:17:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.321 ************************************ 00:05:39.321 START TEST skip_rpc_with_delay 00:05:39.321 ************************************ 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:39.321 11:17:22 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.321 [2024-11-15 11:17:22.194107] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.321 00:05:39.321 real 0m0.187s 00:05:39.321 user 0m0.105s 00:05:39.321 sys 0m0.079s 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.321 11:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:39.321 ************************************ 00:05:39.321 END TEST skip_rpc_with_delay 00:05:39.321 ************************************ 00:05:39.580 11:17:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:39.580 11:17:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:39.580 11:17:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:39.580 11:17:22 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.580 11:17:22 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.580 11:17:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.580 ************************************ 00:05:39.580 START TEST exit_on_failed_rpc_init 00:05:39.580 ************************************ 00:05:39.580 11:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:39.580 11:17:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57248 00:05:39.580 11:17:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57248 00:05:39.580 11:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57248 ']' 00:05:39.580 11:17:22 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.580 11:17:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.580 11:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:39.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.580 11:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.580 11:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:39.580 11:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.580 [2024-11-15 11:17:22.458139] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:05:39.580 [2024-11-15 11:17:22.458362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57248 ] 00:05:39.838 [2024-11-15 11:17:22.640477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.838 [2024-11-15 11:17:22.767926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@650 -- # local es=0 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:40.774 11:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:41.033 [2024-11-15 11:17:23.788292] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:05:41.033 [2024-11-15 11:17:23.788485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57266 ] 00:05:41.033 [2024-11-15 11:17:23.980370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.291 [2024-11-15 11:17:24.134964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.291 [2024-11-15 11:17:24.135111] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:41.291 [2024-11-15 11:17:24.135134] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:41.291 [2024-11-15 11:17:24.135150] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57248 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57248 ']' 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57248 00:05:41.550 11:17:24 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57248 00:05:41.550 killing process with pid 57248 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57248' 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57248 00:05:41.550 11:17:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57248 00:05:44.125 00:05:44.125 real 0m4.241s 00:05:44.125 user 0m4.525s 00:05:44.125 sys 0m0.774s 00:05:44.125 11:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.125 ************************************ 00:05:44.125 END TEST exit_on_failed_rpc_init 00:05:44.125 ************************************ 00:05:44.125 11:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.125 11:17:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:44.125 00:05:44.125 real 0m23.201s 00:05:44.125 user 0m21.707s 00:05:44.125 sys 0m2.841s 00:05:44.125 11:17:26 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.125 11:17:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.125 ************************************ 00:05:44.126 END TEST skip_rpc 00:05:44.126 ************************************ 00:05:44.126 11:17:26 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:44.126 11:17:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.126 11:17:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.126 11:17:26 -- common/autotest_common.sh@10 -- # set +x 00:05:44.126 ************************************ 00:05:44.126 START TEST rpc_client 00:05:44.126 ************************************ 00:05:44.126 11:17:26 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:44.126 * Looking for test storage... 00:05:44.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:44.126 11:17:26 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.126 11:17:26 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.126 11:17:26 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.126 11:17:26 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.126 11:17:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:44.126 11:17:26 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.126 11:17:26 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.126 --rc genhtml_branch_coverage=1 00:05:44.126 --rc genhtml_function_coverage=1 00:05:44.126 --rc genhtml_legend=1 00:05:44.126 --rc geninfo_all_blocks=1 00:05:44.126 --rc geninfo_unexecuted_blocks=1 00:05:44.126 00:05:44.126 ' 00:05:44.126 11:17:26 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.126 --rc genhtml_branch_coverage=1 00:05:44.126 --rc genhtml_function_coverage=1 00:05:44.126 --rc 
genhtml_legend=1 00:05:44.126 --rc geninfo_all_blocks=1 00:05:44.126 --rc geninfo_unexecuted_blocks=1 00:05:44.126 00:05:44.126 ' 00:05:44.126 11:17:26 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.126 --rc genhtml_branch_coverage=1 00:05:44.126 --rc genhtml_function_coverage=1 00:05:44.126 --rc genhtml_legend=1 00:05:44.126 --rc geninfo_all_blocks=1 00:05:44.126 --rc geninfo_unexecuted_blocks=1 00:05:44.126 00:05:44.126 ' 00:05:44.126 11:17:26 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.126 --rc genhtml_branch_coverage=1 00:05:44.126 --rc genhtml_function_coverage=1 00:05:44.126 --rc genhtml_legend=1 00:05:44.126 --rc geninfo_all_blocks=1 00:05:44.126 --rc geninfo_unexecuted_blocks=1 00:05:44.126 00:05:44.126 ' 00:05:44.126 11:17:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:44.126 OK 00:05:44.126 11:17:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:44.126 00:05:44.126 real 0m0.269s 00:05:44.126 user 0m0.157s 00:05:44.126 sys 0m0.116s 00:05:44.126 11:17:26 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.126 11:17:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:44.126 ************************************ 00:05:44.126 END TEST rpc_client 00:05:44.126 ************************************ 00:05:44.126 11:17:26 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:44.126 11:17:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.126 11:17:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.126 11:17:26 -- common/autotest_common.sh@10 -- # set +x 00:05:44.126 ************************************ 00:05:44.126 START TEST json_config 
00:05:44.126 ************************************ 00:05:44.126 11:17:26 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:44.126 11:17:27 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.126 11:17:27 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.126 11:17:27 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.385 11:17:27 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.385 11:17:27 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.385 11:17:27 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.385 11:17:27 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.385 11:17:27 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.385 11:17:27 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.385 11:17:27 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.385 11:17:27 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.385 11:17:27 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.385 11:17:27 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.385 11:17:27 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.385 11:17:27 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.385 11:17:27 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:44.385 11:17:27 json_config -- scripts/common.sh@345 -- # : 1 00:05:44.385 11:17:27 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.385 11:17:27 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.385 11:17:27 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:44.385 11:17:27 json_config -- scripts/common.sh@353 -- # local d=1 00:05:44.385 11:17:27 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.385 11:17:27 json_config -- scripts/common.sh@355 -- # echo 1 00:05:44.385 11:17:27 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.385 11:17:27 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:44.385 11:17:27 json_config -- scripts/common.sh@353 -- # local d=2 00:05:44.385 11:17:27 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.385 11:17:27 json_config -- scripts/common.sh@355 -- # echo 2 00:05:44.385 11:17:27 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.385 11:17:27 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.385 11:17:27 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.385 11:17:27 json_config -- scripts/common.sh@368 -- # return 0 00:05:44.385 11:17:27 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.385 11:17:27 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.385 --rc genhtml_branch_coverage=1 00:05:44.385 --rc genhtml_function_coverage=1 00:05:44.385 --rc genhtml_legend=1 00:05:44.385 --rc geninfo_all_blocks=1 00:05:44.385 --rc geninfo_unexecuted_blocks=1 00:05:44.385 00:05:44.385 ' 00:05:44.385 11:17:27 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.385 --rc genhtml_branch_coverage=1 00:05:44.385 --rc genhtml_function_coverage=1 00:05:44.385 --rc genhtml_legend=1 00:05:44.385 --rc geninfo_all_blocks=1 00:05:44.385 --rc geninfo_unexecuted_blocks=1 00:05:44.385 00:05:44.385 ' 00:05:44.385 11:17:27 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.385 --rc genhtml_branch_coverage=1 00:05:44.385 --rc genhtml_function_coverage=1 00:05:44.385 --rc genhtml_legend=1 00:05:44.385 --rc geninfo_all_blocks=1 00:05:44.385 --rc geninfo_unexecuted_blocks=1 00:05:44.385 00:05:44.385 ' 00:05:44.385 11:17:27 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.385 --rc genhtml_branch_coverage=1 00:05:44.385 --rc genhtml_function_coverage=1 00:05:44.385 --rc genhtml_legend=1 00:05:44.385 --rc geninfo_all_blocks=1 00:05:44.385 --rc geninfo_unexecuted_blocks=1 00:05:44.385 00:05:44.385 ' 00:05:44.385 11:17:27 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5560c3f1-84d4-440d-a043-db521604d4ff 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5560c3f1-84d4-440d-a043-db521604d4ff 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.385 11:17:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:44.386 11:17:27 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:44.386 11:17:27 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.386 11:17:27 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.386 11:17:27 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.386 11:17:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.386 11:17:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.386 11:17:27 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.386 11:17:27 json_config -- paths/export.sh@5 -- # export PATH 00:05:44.386 11:17:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@51 -- # : 0 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:44.386 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:44.386 11:17:27 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:44.386 11:17:27 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
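[Editor's note] The `lt 1.15 2` / `cmp_versions` xtrace above (deciding whether the installed lcov predates 2.x) boils down to splitting both versions on `.`/`-`/`:` and comparing field by field. The sketch below is an illustrative re-implementation reconstructed from the traced steps, not the exact `scripts/common.sh` code; missing fields compare as 0, so `2` behaves like `2.0`.

```shell
#!/usr/bin/env bash
# Illustrative sketch of the lt/cmp_versions helper whose trace appears above.
lt() {  # usage: lt VER1 VER2  -> exit 0 iff VER1 < VER2
  local -a ver1 ver2
  local v max
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    # absent fields default to 0, matching the "(( v < (ver1_l > ver2_l ? ..." trace
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2"   # -> prints "lcov 1.15 predates 2"
```

This is why the log then sets `lcov_rc_opt` for the pre-2.x option spelling: the check returned 0 (true).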
00:05:44.386 11:17:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:44.386 11:17:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:44.386 11:17:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:44.386 11:17:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:44.386 11:17:27 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:44.386 WARNING: No tests are enabled so not running JSON configuration tests 00:05:44.386 11:17:27 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:44.386 00:05:44.386 real 0m0.187s 00:05:44.386 user 0m0.117s 00:05:44.386 sys 0m0.073s 00:05:44.386 11:17:27 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.386 11:17:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.386 ************************************ 00:05:44.386 END TEST json_config 00:05:44.386 ************************************ 00:05:44.386 11:17:27 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:44.386 11:17:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.386 11:17:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.386 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:05:44.386 ************************************ 00:05:44.386 START TEST json_config_extra_key 00:05:44.386 ************************************ 00:05:44.386 11:17:27 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:44.386 11:17:27 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.386 11:17:27 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:05:44.386 11:17:27 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.645 11:17:27 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.645 11:17:27 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:44.645 11:17:27 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.645 11:17:27 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.645 --rc genhtml_branch_coverage=1 00:05:44.645 --rc genhtml_function_coverage=1 00:05:44.646 --rc genhtml_legend=1 00:05:44.646 --rc geninfo_all_blocks=1 00:05:44.646 --rc geninfo_unexecuted_blocks=1 00:05:44.646 00:05:44.646 ' 00:05:44.646 11:17:27 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.646 --rc genhtml_branch_coverage=1 00:05:44.646 --rc genhtml_function_coverage=1 00:05:44.646 --rc 
genhtml_legend=1 00:05:44.646 --rc geninfo_all_blocks=1 00:05:44.646 --rc geninfo_unexecuted_blocks=1 00:05:44.646 00:05:44.646 ' 00:05:44.646 11:17:27 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.646 --rc genhtml_branch_coverage=1 00:05:44.646 --rc genhtml_function_coverage=1 00:05:44.646 --rc genhtml_legend=1 00:05:44.646 --rc geninfo_all_blocks=1 00:05:44.646 --rc geninfo_unexecuted_blocks=1 00:05:44.646 00:05:44.646 ' 00:05:44.646 11:17:27 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.646 --rc genhtml_branch_coverage=1 00:05:44.646 --rc genhtml_function_coverage=1 00:05:44.646 --rc genhtml_legend=1 00:05:44.646 --rc geninfo_all_blocks=1 00:05:44.646 --rc geninfo_unexecuted_blocks=1 00:05:44.646 00:05:44.646 ' 00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5560c3f1-84d4-440d-a043-db521604d4ff 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5560c3f1-84d4-440d-a043-db521604d4ff 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:44.646 11:17:27 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:44.646 11:17:27 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.646 11:17:27 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.646 11:17:27 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.646 11:17:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.646 11:17:27 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.646 11:17:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.646 11:17:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:44.646 11:17:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:44.646 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:44.646 11:17:27 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:44.646 INFO: launching applications... 00:05:44.646 Waiting for target to run... 00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
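[Editor's note] The `[: : integer expression expected` message above is benign fallout from `'[' '' -eq 1 ']'`: the `-eq` operator requires integers on both sides, and the traced variable expanded to an empty string. A minimal reproduction, plus the usual default-expansion guard (an illustrative fix, not the SPDK one):

```shell
#!/usr/bin/env bash
# Reproduce the warning seen in nvmf/common.sh line 33 of the log.
flag=""
if [ "$flag" -eq 1 ]; then echo enabled; fi   # stderr: "[: : integer expression expected"

# Expanding with a default sidesteps the warning:
if [ "${flag:-0}" -eq 1 ]; then echo enabled; else echo disabled; fi   # -> prints "disabled"
```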
00:05:44.646 11:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:44.646 11:17:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:44.646 11:17:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:44.646 11:17:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:44.646 11:17:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:44.646 11:17:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:44.646 11:17:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.646 11:17:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.646 11:17:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57476 00:05:44.646 11:17:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:44.646 11:17:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57476 /var/tmp/spdk_tgt.sock 00:05:44.646 11:17:27 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:44.646 11:17:27 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57476 ']' 00:05:44.646 11:17:27 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.646 11:17:27 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.646 11:17:27 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:44.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:44.646 11:17:27 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.646 11:17:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.646 [2024-11-15 11:17:27.537795] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:05:44.646 [2024-11-15 11:17:27.538237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57476 ] 00:05:45.214 [2024-11-15 11:17:28.018253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.472 [2024-11-15 11:17:28.177500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.042 11:17:28 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.042 11:17:28 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:46.042 11:17:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:46.042 00:05:46.042 11:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:46.042 INFO: shutting down applications... 
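[Editor's note] The "shutting down applications" step proceeds by sending SIGINT to the target and polling the pid with `kill -0` every 0.5 s for up to 30 tries (~15 s), as the `(( i++ ))` / `sleep 0.5` trace shows. A simplified sketch of that loop (not the exact `json_config/common.sh` code; the signal parameter is an addition so the demo can use TERM, since background jobs in scripts ignore INT):

```shell
#!/usr/bin/env bash
# Poll-for-exit shutdown: signal, then probe the pid until it disappears.
shutdown_app() {  # usage: shutdown_app PID [SIGNAL]
  local pid=$1 sig=${2:-INT} i
  kill -s "$sig" "$pid" 2>/dev/null
  for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$pid" 2>/dev/null; then
      echo "SPDK target shutdown done"
      return 0
    fi
    sleep 0.5
  done
  return 1   # target still alive after the ~15 s retry budget
}

sleep 60 &            # stand-in for the spdk_tgt process
shutdown_app $! TERM
```

In the log the target needed five `sleep 0.5` rounds before `kill -0 57476` finally failed and the "SPDK target shutdown done" branch ran.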
00:05:46.042 11:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:46.042 11:17:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:46.042 11:17:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:46.042 11:17:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57476 ]] 00:05:46.042 11:17:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57476 00:05:46.042 11:17:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:46.042 11:17:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.042 11:17:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57476 00:05:46.042 11:17:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.611 11:17:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.611 11:17:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.611 11:17:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57476 00:05:46.611 11:17:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.180 11:17:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.180 11:17:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.180 11:17:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57476 00:05:47.180 11:17:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.439 11:17:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.440 11:17:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.440 11:17:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57476 00:05:47.440 11:17:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.008 11:17:30 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:48.008 11:17:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.008 11:17:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57476 00:05:48.008 11:17:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.576 11:17:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.576 11:17:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.576 11:17:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57476 00:05:48.576 11:17:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:48.576 11:17:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:48.576 SPDK target shutdown done 00:05:48.576 11:17:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:48.576 11:17:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:48.576 Success 00:05:48.576 11:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:48.576 ************************************ 00:05:48.576 END TEST json_config_extra_key 00:05:48.576 ************************************ 00:05:48.576 00:05:48.576 real 0m4.131s 00:05:48.576 user 0m3.855s 00:05:48.576 sys 0m0.661s 00:05:48.576 11:17:31 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.576 11:17:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.576 11:17:31 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.576 11:17:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.576 11:17:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.576 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:05:48.576 ************************************ 00:05:48.576 START TEST alias_rpc 00:05:48.576 
************************************ 00:05:48.576 11:17:31 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.576 * Looking for test storage... 00:05:48.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:48.576 11:17:31 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:48.576 11:17:31 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:48.576 11:17:31 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:48.836 11:17:31 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.836 11:17:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:48.836 11:17:31 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.836 11:17:31 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:48.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.836 --rc genhtml_branch_coverage=1 00:05:48.836 --rc genhtml_function_coverage=1 00:05:48.836 --rc genhtml_legend=1 00:05:48.836 --rc geninfo_all_blocks=1 00:05:48.836 --rc geninfo_unexecuted_blocks=1 00:05:48.836 00:05:48.836 ' 00:05:48.836 11:17:31 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:48.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.836 --rc genhtml_branch_coverage=1 00:05:48.836 --rc genhtml_function_coverage=1 00:05:48.836 --rc genhtml_legend=1 00:05:48.836 --rc geninfo_all_blocks=1 00:05:48.836 --rc geninfo_unexecuted_blocks=1 00:05:48.836 00:05:48.836 ' 00:05:48.836 11:17:31 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:05:48.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.836 --rc genhtml_branch_coverage=1 00:05:48.836 --rc genhtml_function_coverage=1 00:05:48.836 --rc genhtml_legend=1 00:05:48.836 --rc geninfo_all_blocks=1 00:05:48.836 --rc geninfo_unexecuted_blocks=1 00:05:48.836 00:05:48.836 ' 00:05:48.836 11:17:31 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:48.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.836 --rc genhtml_branch_coverage=1 00:05:48.836 --rc genhtml_function_coverage=1 00:05:48.836 --rc genhtml_legend=1 00:05:48.836 --rc geninfo_all_blocks=1 00:05:48.836 --rc geninfo_unexecuted_blocks=1 00:05:48.836 00:05:48.836 ' 00:05:48.836 11:17:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:48.836 11:17:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57581 00:05:48.836 11:17:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.836 11:17:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57581 00:05:48.836 11:17:31 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57581 ']' 00:05:48.836 11:17:31 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.836 11:17:31 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:48.836 11:17:31 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.836 11:17:31 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:48.836 11:17:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.836 [2024-11-15 11:17:31.728611] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:05:48.836 [2024-11-15 11:17:31.728802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57581 ] 00:05:49.095 [2024-11-15 11:17:31.910254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.354 [2024-11-15 11:17:32.045868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.316 11:17:32 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:50.316 11:17:32 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:50.316 11:17:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:50.575 11:17:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57581 00:05:50.575 11:17:33 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57581 ']' 00:05:50.575 11:17:33 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57581 00:05:50.575 11:17:33 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:50.575 11:17:33 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:50.575 11:17:33 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57581 00:05:50.575 killing process with pid 57581 00:05:50.575 11:17:33 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:50.575 11:17:33 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:50.575 11:17:33 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57581' 00:05:50.575 11:17:33 alias_rpc -- common/autotest_common.sh@971 -- # kill 57581 00:05:50.575 11:17:33 alias_rpc -- common/autotest_common.sh@976 -- # wait 57581 00:05:52.488 ************************************ 00:05:52.488 END TEST alias_rpc 00:05:52.488 ************************************ 00:05:52.488 00:05:52.488 real 
0m4.015s 00:05:52.488 user 0m4.019s 00:05:52.488 sys 0m0.773s 00:05:52.488 11:17:35 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.488 11:17:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.748 11:17:35 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:52.748 11:17:35 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:52.748 11:17:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:52.748 11:17:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.748 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:05:52.748 ************************************ 00:05:52.748 START TEST spdkcli_tcp 00:05:52.748 ************************************ 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:52.748 * Looking for test storage... 00:05:52.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.748 
11:17:35 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.748 11:17:35 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:52.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.748 --rc genhtml_branch_coverage=1 00:05:52.748 --rc genhtml_function_coverage=1 00:05:52.748 --rc genhtml_legend=1 
00:05:52.748 --rc geninfo_all_blocks=1 00:05:52.748 --rc geninfo_unexecuted_blocks=1 00:05:52.748 00:05:52.748 ' 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:52.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.748 --rc genhtml_branch_coverage=1 00:05:52.748 --rc genhtml_function_coverage=1 00:05:52.748 --rc genhtml_legend=1 00:05:52.748 --rc geninfo_all_blocks=1 00:05:52.748 --rc geninfo_unexecuted_blocks=1 00:05:52.748 00:05:52.748 ' 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:52.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.748 --rc genhtml_branch_coverage=1 00:05:52.748 --rc genhtml_function_coverage=1 00:05:52.748 --rc genhtml_legend=1 00:05:52.748 --rc geninfo_all_blocks=1 00:05:52.748 --rc geninfo_unexecuted_blocks=1 00:05:52.748 00:05:52.748 ' 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:52.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.748 --rc genhtml_branch_coverage=1 00:05:52.748 --rc genhtml_function_coverage=1 00:05:52.748 --rc genhtml_legend=1 00:05:52.748 --rc geninfo_all_blocks=1 00:05:52.748 --rc geninfo_unexecuted_blocks=1 00:05:52.748 00:05:52.748 ' 00:05:52.748 11:17:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:52.748 11:17:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:52.748 11:17:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:52.748 11:17:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:52.748 11:17:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:52.748 11:17:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:52.748 11:17:35 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.748 11:17:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57688 00:05:52.748 11:17:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:52.748 11:17:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57688 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57688 ']' 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:52.748 11:17:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.008 [2024-11-15 11:17:35.806988] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:05:53.008 [2024-11-15 11:17:35.807505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57688 ] 00:05:53.267 [2024-11-15 11:17:35.993000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.267 [2024-11-15 11:17:36.121522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.267 [2024-11-15 11:17:36.121535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.205 11:17:36 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.205 11:17:36 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:54.205 11:17:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57705 00:05:54.205 11:17:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:54.205 11:17:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:54.465 [ 00:05:54.465 "bdev_malloc_delete", 00:05:54.465 "bdev_malloc_create", 00:05:54.465 "bdev_null_resize", 00:05:54.465 "bdev_null_delete", 00:05:54.465 "bdev_null_create", 00:05:54.465 "bdev_nvme_cuse_unregister", 00:05:54.465 "bdev_nvme_cuse_register", 00:05:54.465 "bdev_opal_new_user", 00:05:54.465 "bdev_opal_set_lock_state", 00:05:54.465 "bdev_opal_delete", 00:05:54.465 "bdev_opal_get_info", 00:05:54.465 "bdev_opal_create", 00:05:54.465 "bdev_nvme_opal_revert", 00:05:54.465 "bdev_nvme_opal_init", 00:05:54.465 "bdev_nvme_send_cmd", 00:05:54.465 "bdev_nvme_set_keys", 00:05:54.465 "bdev_nvme_get_path_iostat", 00:05:54.465 "bdev_nvme_get_mdns_discovery_info", 00:05:54.465 "bdev_nvme_stop_mdns_discovery", 00:05:54.465 "bdev_nvme_start_mdns_discovery", 00:05:54.465 "bdev_nvme_set_multipath_policy", 00:05:54.465 
"bdev_nvme_set_preferred_path", 00:05:54.465 "bdev_nvme_get_io_paths", 00:05:54.465 "bdev_nvme_remove_error_injection", 00:05:54.465 "bdev_nvme_add_error_injection", 00:05:54.465 "bdev_nvme_get_discovery_info", 00:05:54.465 "bdev_nvme_stop_discovery", 00:05:54.465 "bdev_nvme_start_discovery", 00:05:54.465 "bdev_nvme_get_controller_health_info", 00:05:54.465 "bdev_nvme_disable_controller", 00:05:54.465 "bdev_nvme_enable_controller", 00:05:54.465 "bdev_nvme_reset_controller", 00:05:54.465 "bdev_nvme_get_transport_statistics", 00:05:54.465 "bdev_nvme_apply_firmware", 00:05:54.465 "bdev_nvme_detach_controller", 00:05:54.465 "bdev_nvme_get_controllers", 00:05:54.465 "bdev_nvme_attach_controller", 00:05:54.465 "bdev_nvme_set_hotplug", 00:05:54.465 "bdev_nvme_set_options", 00:05:54.465 "bdev_passthru_delete", 00:05:54.465 "bdev_passthru_create", 00:05:54.465 "bdev_lvol_set_parent_bdev", 00:05:54.465 "bdev_lvol_set_parent", 00:05:54.465 "bdev_lvol_check_shallow_copy", 00:05:54.465 "bdev_lvol_start_shallow_copy", 00:05:54.465 "bdev_lvol_grow_lvstore", 00:05:54.465 "bdev_lvol_get_lvols", 00:05:54.465 "bdev_lvol_get_lvstores", 00:05:54.465 "bdev_lvol_delete", 00:05:54.465 "bdev_lvol_set_read_only", 00:05:54.465 "bdev_lvol_resize", 00:05:54.465 "bdev_lvol_decouple_parent", 00:05:54.465 "bdev_lvol_inflate", 00:05:54.465 "bdev_lvol_rename", 00:05:54.465 "bdev_lvol_clone_bdev", 00:05:54.465 "bdev_lvol_clone", 00:05:54.465 "bdev_lvol_snapshot", 00:05:54.465 "bdev_lvol_create", 00:05:54.465 "bdev_lvol_delete_lvstore", 00:05:54.465 "bdev_lvol_rename_lvstore", 00:05:54.465 "bdev_lvol_create_lvstore", 00:05:54.465 "bdev_raid_set_options", 00:05:54.465 "bdev_raid_remove_base_bdev", 00:05:54.465 "bdev_raid_add_base_bdev", 00:05:54.465 "bdev_raid_delete", 00:05:54.465 "bdev_raid_create", 00:05:54.465 "bdev_raid_get_bdevs", 00:05:54.465 "bdev_error_inject_error", 00:05:54.465 "bdev_error_delete", 00:05:54.465 "bdev_error_create", 00:05:54.465 "bdev_split_delete", 00:05:54.465 
"bdev_split_create", 00:05:54.465 "bdev_delay_delete", 00:05:54.465 "bdev_delay_create", 00:05:54.465 "bdev_delay_update_latency", 00:05:54.465 "bdev_zone_block_delete", 00:05:54.465 "bdev_zone_block_create", 00:05:54.465 "blobfs_create", 00:05:54.465 "blobfs_detect", 00:05:54.465 "blobfs_set_cache_size", 00:05:54.465 "bdev_aio_delete", 00:05:54.465 "bdev_aio_rescan", 00:05:54.465 "bdev_aio_create", 00:05:54.465 "bdev_ftl_set_property", 00:05:54.465 "bdev_ftl_get_properties", 00:05:54.465 "bdev_ftl_get_stats", 00:05:54.465 "bdev_ftl_unmap", 00:05:54.465 "bdev_ftl_unload", 00:05:54.465 "bdev_ftl_delete", 00:05:54.465 "bdev_ftl_load", 00:05:54.465 "bdev_ftl_create", 00:05:54.465 "bdev_virtio_attach_controller", 00:05:54.465 "bdev_virtio_scsi_get_devices", 00:05:54.465 "bdev_virtio_detach_controller", 00:05:54.465 "bdev_virtio_blk_set_hotplug", 00:05:54.465 "bdev_iscsi_delete", 00:05:54.465 "bdev_iscsi_create", 00:05:54.465 "bdev_iscsi_set_options", 00:05:54.465 "accel_error_inject_error", 00:05:54.465 "ioat_scan_accel_module", 00:05:54.465 "dsa_scan_accel_module", 00:05:54.465 "iaa_scan_accel_module", 00:05:54.465 "keyring_file_remove_key", 00:05:54.465 "keyring_file_add_key", 00:05:54.465 "keyring_linux_set_options", 00:05:54.465 "fsdev_aio_delete", 00:05:54.465 "fsdev_aio_create", 00:05:54.465 "iscsi_get_histogram", 00:05:54.465 "iscsi_enable_histogram", 00:05:54.465 "iscsi_set_options", 00:05:54.465 "iscsi_get_auth_groups", 00:05:54.465 "iscsi_auth_group_remove_secret", 00:05:54.465 "iscsi_auth_group_add_secret", 00:05:54.465 "iscsi_delete_auth_group", 00:05:54.465 "iscsi_create_auth_group", 00:05:54.465 "iscsi_set_discovery_auth", 00:05:54.465 "iscsi_get_options", 00:05:54.465 "iscsi_target_node_request_logout", 00:05:54.465 "iscsi_target_node_set_redirect", 00:05:54.465 "iscsi_target_node_set_auth", 00:05:54.465 "iscsi_target_node_add_lun", 00:05:54.465 "iscsi_get_stats", 00:05:54.465 "iscsi_get_connections", 00:05:54.465 "iscsi_portal_group_set_auth", 
00:05:54.465 "iscsi_start_portal_group", 00:05:54.465 "iscsi_delete_portal_group", 00:05:54.465 "iscsi_create_portal_group", 00:05:54.465 "iscsi_get_portal_groups", 00:05:54.465 "iscsi_delete_target_node", 00:05:54.465 "iscsi_target_node_remove_pg_ig_maps", 00:05:54.465 "iscsi_target_node_add_pg_ig_maps", 00:05:54.465 "iscsi_create_target_node", 00:05:54.465 "iscsi_get_target_nodes", 00:05:54.465 "iscsi_delete_initiator_group", 00:05:54.465 "iscsi_initiator_group_remove_initiators", 00:05:54.465 "iscsi_initiator_group_add_initiators", 00:05:54.465 "iscsi_create_initiator_group", 00:05:54.465 "iscsi_get_initiator_groups", 00:05:54.465 "nvmf_set_crdt", 00:05:54.465 "nvmf_set_config", 00:05:54.465 "nvmf_set_max_subsystems", 00:05:54.465 "nvmf_stop_mdns_prr", 00:05:54.465 "nvmf_publish_mdns_prr", 00:05:54.465 "nvmf_subsystem_get_listeners", 00:05:54.465 "nvmf_subsystem_get_qpairs", 00:05:54.465 "nvmf_subsystem_get_controllers", 00:05:54.465 "nvmf_get_stats", 00:05:54.465 "nvmf_get_transports", 00:05:54.465 "nvmf_create_transport", 00:05:54.465 "nvmf_get_targets", 00:05:54.465 "nvmf_delete_target", 00:05:54.465 "nvmf_create_target", 00:05:54.465 "nvmf_subsystem_allow_any_host", 00:05:54.465 "nvmf_subsystem_set_keys", 00:05:54.465 "nvmf_subsystem_remove_host", 00:05:54.465 "nvmf_subsystem_add_host", 00:05:54.465 "nvmf_ns_remove_host", 00:05:54.465 "nvmf_ns_add_host", 00:05:54.465 "nvmf_subsystem_remove_ns", 00:05:54.465 "nvmf_subsystem_set_ns_ana_group", 00:05:54.465 "nvmf_subsystem_add_ns", 00:05:54.465 "nvmf_subsystem_listener_set_ana_state", 00:05:54.465 "nvmf_discovery_get_referrals", 00:05:54.465 "nvmf_discovery_remove_referral", 00:05:54.465 "nvmf_discovery_add_referral", 00:05:54.465 "nvmf_subsystem_remove_listener", 00:05:54.465 "nvmf_subsystem_add_listener", 00:05:54.465 "nvmf_delete_subsystem", 00:05:54.465 "nvmf_create_subsystem", 00:05:54.465 "nvmf_get_subsystems", 00:05:54.465 "env_dpdk_get_mem_stats", 00:05:54.465 "nbd_get_disks", 00:05:54.465 
"nbd_stop_disk", 00:05:54.465 "nbd_start_disk", 00:05:54.465 "ublk_recover_disk", 00:05:54.465 "ublk_get_disks", 00:05:54.465 "ublk_stop_disk", 00:05:54.465 "ublk_start_disk", 00:05:54.465 "ublk_destroy_target", 00:05:54.465 "ublk_create_target", 00:05:54.465 "virtio_blk_create_transport", 00:05:54.465 "virtio_blk_get_transports", 00:05:54.465 "vhost_controller_set_coalescing", 00:05:54.465 "vhost_get_controllers", 00:05:54.465 "vhost_delete_controller", 00:05:54.465 "vhost_create_blk_controller", 00:05:54.465 "vhost_scsi_controller_remove_target", 00:05:54.465 "vhost_scsi_controller_add_target", 00:05:54.465 "vhost_start_scsi_controller", 00:05:54.465 "vhost_create_scsi_controller", 00:05:54.465 "thread_set_cpumask", 00:05:54.465 "scheduler_set_options", 00:05:54.465 "framework_get_governor", 00:05:54.465 "framework_get_scheduler", 00:05:54.465 "framework_set_scheduler", 00:05:54.465 "framework_get_reactors", 00:05:54.465 "thread_get_io_channels", 00:05:54.465 "thread_get_pollers", 00:05:54.465 "thread_get_stats", 00:05:54.465 "framework_monitor_context_switch", 00:05:54.465 "spdk_kill_instance", 00:05:54.465 "log_enable_timestamps", 00:05:54.465 "log_get_flags", 00:05:54.465 "log_clear_flag", 00:05:54.465 "log_set_flag", 00:05:54.465 "log_get_level", 00:05:54.465 "log_set_level", 00:05:54.465 "log_get_print_level", 00:05:54.466 "log_set_print_level", 00:05:54.466 "framework_enable_cpumask_locks", 00:05:54.466 "framework_disable_cpumask_locks", 00:05:54.466 "framework_wait_init", 00:05:54.466 "framework_start_init", 00:05:54.466 "scsi_get_devices", 00:05:54.466 "bdev_get_histogram", 00:05:54.466 "bdev_enable_histogram", 00:05:54.466 "bdev_set_qos_limit", 00:05:54.466 "bdev_set_qd_sampling_period", 00:05:54.466 "bdev_get_bdevs", 00:05:54.466 "bdev_reset_iostat", 00:05:54.466 "bdev_get_iostat", 00:05:54.466 "bdev_examine", 00:05:54.466 "bdev_wait_for_examine", 00:05:54.466 "bdev_set_options", 00:05:54.466 "accel_get_stats", 00:05:54.466 "accel_set_options", 
00:05:54.466 "accel_set_driver", 00:05:54.466 "accel_crypto_key_destroy", 00:05:54.466 "accel_crypto_keys_get", 00:05:54.466 "accel_crypto_key_create", 00:05:54.466 "accel_assign_opc", 00:05:54.466 "accel_get_module_info", 00:05:54.466 "accel_get_opc_assignments", 00:05:54.466 "vmd_rescan", 00:05:54.466 "vmd_remove_device", 00:05:54.466 "vmd_enable", 00:05:54.466 "sock_get_default_impl", 00:05:54.466 "sock_set_default_impl", 00:05:54.466 "sock_impl_set_options", 00:05:54.466 "sock_impl_get_options", 00:05:54.466 "iobuf_get_stats", 00:05:54.466 "iobuf_set_options", 00:05:54.466 "keyring_get_keys", 00:05:54.466 "framework_get_pci_devices", 00:05:54.466 "framework_get_config", 00:05:54.466 "framework_get_subsystems", 00:05:54.466 "fsdev_set_opts", 00:05:54.466 "fsdev_get_opts", 00:05:54.466 "trace_get_info", 00:05:54.466 "trace_get_tpoint_group_mask", 00:05:54.466 "trace_disable_tpoint_group", 00:05:54.466 "trace_enable_tpoint_group", 00:05:54.466 "trace_clear_tpoint_mask", 00:05:54.466 "trace_set_tpoint_mask", 00:05:54.466 "notify_get_notifications", 00:05:54.466 "notify_get_types", 00:05:54.466 "spdk_get_version", 00:05:54.466 "rpc_get_methods" 00:05:54.466 ] 00:05:54.466 11:17:37 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:54.466 11:17:37 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.466 11:17:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.466 11:17:37 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:54.466 11:17:37 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57688 00:05:54.466 11:17:37 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57688 ']' 00:05:54.466 11:17:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57688 00:05:54.466 11:17:37 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:54.466 11:17:37 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:54.466 11:17:37 spdkcli_tcp -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57688 00:05:54.466 11:17:37 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:54.466 killing process with pid 57688 00:05:54.466 11:17:37 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:54.466 11:17:37 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57688' 00:05:54.466 11:17:37 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57688 00:05:54.466 11:17:37 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57688 00:05:57.002 ************************************ 00:05:57.002 END TEST spdkcli_tcp 00:05:57.002 ************************************ 00:05:57.002 00:05:57.002 real 0m4.056s 00:05:57.002 user 0m7.169s 00:05:57.002 sys 0m0.788s 00:05:57.002 11:17:39 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.002 11:17:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.002 11:17:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.002 11:17:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.002 11:17:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.002 11:17:39 -- common/autotest_common.sh@10 -- # set +x 00:05:57.002 ************************************ 00:05:57.002 START TEST dpdk_mem_utility 00:05:57.002 ************************************ 00:05:57.002 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.002 * Looking for test storage... 
00:05:57.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:57.002 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:57.002 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:57.002 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:57.002 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:57.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.002 11:17:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:57.002 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.002 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:57.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.002 --rc genhtml_branch_coverage=1 00:05:57.002 --rc genhtml_function_coverage=1 00:05:57.002 --rc genhtml_legend=1 00:05:57.002 --rc geninfo_all_blocks=1 00:05:57.002 --rc geninfo_unexecuted_blocks=1 00:05:57.002 00:05:57.002 ' 00:05:57.002 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:57.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.002 --rc genhtml_branch_coverage=1 00:05:57.002 --rc genhtml_function_coverage=1 
00:05:57.002 --rc genhtml_legend=1 00:05:57.002 --rc geninfo_all_blocks=1 00:05:57.002 --rc geninfo_unexecuted_blocks=1 00:05:57.002 00:05:57.002 ' 00:05:57.002 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:57.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.002 --rc genhtml_branch_coverage=1 00:05:57.002 --rc genhtml_function_coverage=1 00:05:57.002 --rc genhtml_legend=1 00:05:57.002 --rc geninfo_all_blocks=1 00:05:57.002 --rc geninfo_unexecuted_blocks=1 00:05:57.002 00:05:57.002 ' 00:05:57.002 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:57.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.003 --rc genhtml_branch_coverage=1 00:05:57.003 --rc genhtml_function_coverage=1 00:05:57.003 --rc genhtml_legend=1 00:05:57.003 --rc geninfo_all_blocks=1 00:05:57.003 --rc geninfo_unexecuted_blocks=1 00:05:57.003 00:05:57.003 ' 00:05:57.003 11:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:57.003 11:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57810 00:05:57.003 11:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57810 00:05:57.003 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57810 ']' 00:05:57.003 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.003 11:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.003 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:57.003 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:57.003 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:57.003 11:17:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:57.003 [2024-11-15 11:17:39.900439] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:05:57.003 [2024-11-15 11:17:39.900861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57810 ] 00:05:57.261 [2024-11-15 11:17:40.084080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.520 [2024-11-15 11:17:40.216079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.459 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:58.459 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:58.459 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:58.459 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:58.459 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.459 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.459 { 00:05:58.459 "filename": "/tmp/spdk_mem_dump.txt" 00:05:58.459 } 00:05:58.459 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.459 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:58.459 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:58.459 1 heaps totaling size 824.000000 MiB 00:05:58.459 size: 824.000000 MiB heap id: 0 00:05:58.459 end heaps---------- 00:05:58.459 9 mempools totaling size 603.782043 MiB 00:05:58.459 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:58.459 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:58.459 size: 100.555481 MiB name: bdev_io_57810 00:05:58.459 size: 50.003479 MiB name: msgpool_57810 00:05:58.459 size: 36.509338 MiB name: fsdev_io_57810 00:05:58.459 size: 21.763794 MiB name: PDU_Pool 00:05:58.459 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:58.459 size: 4.133484 MiB name: evtpool_57810 00:05:58.459 size: 0.026123 MiB name: Session_Pool 00:05:58.459 end mempools------- 00:05:58.459 6 memzones totaling size 4.142822 MiB 00:05:58.459 size: 1.000366 MiB name: RG_ring_0_57810 00:05:58.459 size: 1.000366 MiB name: RG_ring_1_57810 00:05:58.459 size: 1.000366 MiB name: RG_ring_4_57810 00:05:58.459 size: 1.000366 MiB name: RG_ring_5_57810 00:05:58.459 size: 0.125366 MiB name: RG_ring_2_57810 00:05:58.459 size: 0.015991 MiB name: RG_ring_3_57810 00:05:58.459 end memzones------- 00:05:58.459 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:58.459 heap id: 0 total size: 824.000000 MiB number of busy elements: 321 number of free elements: 18 00:05:58.459 list of free elements. 
size: 16.779907 MiB 00:05:58.459 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:58.459 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:58.459 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:58.459 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:58.460 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:58.460 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:58.460 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:58.460 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:58.460 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:58.460 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:58.460 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:58.460 element at address: 0x20001b400000 with size: 0.561462 MiB 00:05:58.460 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:58.460 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:58.460 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:58.460 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:58.460 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:58.460 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:58.460 list of standard malloc elements. 
size: 199.289185 MiB 00:05:58.460 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:58.460 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:58.460 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:58.460 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:58.460 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:58.460 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:58.460 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:58.460 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:58.460 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:58.460 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:58.460 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:58.460 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:58.460 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:58.460 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:58.460 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:58.460 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:58.460 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:58.461 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:58.461 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:58.461 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b490bc0 with size: 0.000244 
MiB 00:05:58.461 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4927c0 
with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:58.461 element at 
address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:58.461 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:58.461 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886b680 with size: 0.000244 MiB 
00:05:58.461 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886d280 with 
size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:58.461 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:58.462 element at address: 
0x20002886ee80 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:58.462 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:58.462 list of memzone associated elements. 
size: 607.930908 MiB 00:05:58.462 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:58.462 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:58.462 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:58.462 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:58.462 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:58.462 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57810_0 00:05:58.462 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:58.462 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57810_0 00:05:58.462 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:58.462 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57810_0 00:05:58.462 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:58.462 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:58.462 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:58.462 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:58.462 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:58.462 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57810_0 00:05:58.462 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:58.462 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57810 00:05:58.462 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:58.462 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57810 00:05:58.462 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:58.462 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:58.462 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:58.462 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:58.462 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:58.462 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:58.462 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:58.462 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:58.462 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:58.462 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57810 00:05:58.462 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:58.462 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57810 00:05:58.462 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:58.462 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57810 00:05:58.462 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:58.462 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57810 00:05:58.462 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:58.462 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57810 00:05:58.462 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:58.462 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57810 00:05:58.462 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:58.462 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:58.462 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:58.462 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:58.462 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:58.462 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:58.462 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:58.462 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57810 00:05:58.462 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:58.462 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57810 00:05:58.462 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:58.462 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:58.462 element at address: 0x200028864140 with size: 0.023804 MiB
00:05:58.462 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:58.462 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:58.462 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57810
00:05:58.462 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:05:58.462 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:58.462 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:58.462 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57810
00:05:58.462 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:58.462 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57810
00:05:58.462 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:58.462 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57810
00:05:58.462 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:05:58.462 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:58.462 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:58.462 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57810
00:05:58.462 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57810 ']'
00:05:58.462 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57810
00:05:58.462 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:05:58.462 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:58.462 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57810
killing process with pid 57810
11:17:41 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:58.462 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:58.462 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57810'
00:05:58.462 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57810
00:05:58.462 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57810
00:06:00.999
00:06:00.999 real 0m3.798s
00:06:00.999 user 0m3.728s
00:06:00.999 sys 0m0.710s
00:06:00.999 11:17:43 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:00.999 ************************************
00:06:00.999 END TEST dpdk_mem_utility
00:06:00.999 ************************************
00:06:00.999 11:17:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:00.999 11:17:43 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:00.999 11:17:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:00.999 11:17:43 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:00.999 11:17:43 -- common/autotest_common.sh@10 -- # set +x
00:06:00.999 ************************************
00:06:00.999 START TEST event
00:06:00.999 ************************************
00:06:00.999 11:17:43 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
* Looking for test storage...
00:06:00.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:00.999 11:17:43 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:00.999 11:17:43 event -- common/autotest_common.sh@1691 -- # lcov --version
00:06:00.999 11:17:43 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:00.999 11:17:43 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:00.999 11:17:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:00.999 11:17:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:00.999 11:17:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:00.999 11:17:43 event -- scripts/common.sh@336 -- # IFS=.-:
00:06:00.999 11:17:43 event -- scripts/common.sh@336 -- # read -ra ver1
00:06:00.999 11:17:43 event -- scripts/common.sh@337 -- # IFS=.-:
00:06:00.999 11:17:43 event -- scripts/common.sh@337 -- # read -ra ver2
00:06:00.999 11:17:43 event -- scripts/common.sh@338 -- # local 'op=<'
00:06:00.999 11:17:43 event -- scripts/common.sh@340 -- # ver1_l=2
00:06:00.999 11:17:43 event -- scripts/common.sh@341 -- # ver2_l=1
00:06:00.999 11:17:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:00.999 11:17:43 event -- scripts/common.sh@344 -- # case "$op" in
00:06:00.999 11:17:43 event -- scripts/common.sh@345 -- # : 1
00:06:00.999 11:17:43 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:00.999 11:17:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:00.999 11:17:43 event -- scripts/common.sh@365 -- # decimal 1
00:06:00.999 11:17:43 event -- scripts/common.sh@353 -- # local d=1
00:06:00.999 11:17:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:00.999 11:17:43 event -- scripts/common.sh@355 -- # echo 1
00:06:00.999 11:17:43 event -- scripts/common.sh@365 -- # ver1[v]=1
00:06:00.999 11:17:43 event -- scripts/common.sh@366 -- # decimal 2
00:06:00.999 11:17:43 event -- scripts/common.sh@353 -- # local d=2
00:06:00.999 11:17:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:00.999 11:17:43 event -- scripts/common.sh@355 -- # echo 2
00:06:00.999 11:17:43 event -- scripts/common.sh@366 -- # ver2[v]=2
00:06:00.999 11:17:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:00.999 11:17:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:00.999 11:17:43 event -- scripts/common.sh@368 -- # return 0
00:06:00.999 11:17:43 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:00.999 11:17:43 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:00.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.999 --rc genhtml_branch_coverage=1
00:06:00.999 --rc genhtml_function_coverage=1
00:06:00.999 --rc genhtml_legend=1
00:06:00.999 --rc geninfo_all_blocks=1
00:06:00.999 --rc geninfo_unexecuted_blocks=1
00:06:00.999
00:06:00.999 '
00:06:00.999 11:17:43 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:00.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.999 --rc genhtml_branch_coverage=1
00:06:00.999 --rc genhtml_function_coverage=1
00:06:00.999 --rc genhtml_legend=1
00:06:00.999 --rc geninfo_all_blocks=1
00:06:00.999 --rc geninfo_unexecuted_blocks=1
00:06:00.999
00:06:00.999 '
00:06:00.999 11:17:43 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:00.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.999 --rc genhtml_branch_coverage=1
00:06:00.999 --rc genhtml_function_coverage=1
00:06:00.999 --rc genhtml_legend=1
00:06:00.999 --rc geninfo_all_blocks=1
00:06:00.999 --rc geninfo_unexecuted_blocks=1
00:06:00.999
00:06:00.999 '
00:06:00.999 11:17:43 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:00.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.999 --rc genhtml_branch_coverage=1
00:06:00.999 --rc genhtml_function_coverage=1
00:06:00.999 --rc genhtml_legend=1
00:06:00.999 --rc geninfo_all_blocks=1
00:06:00.999 --rc geninfo_unexecuted_blocks=1
00:06:00.999
00:06:00.999 '
00:06:00.999 11:17:43 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:00.999 11:17:43 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:00.999 11:17:43 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:00.999 11:17:43 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:06:00.999 11:17:43 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:00.999 11:17:43 event -- common/autotest_common.sh@10 -- # set +x
00:06:00.999 ************************************
00:06:00.999 START TEST event_perf
00:06:00.999 ************************************
00:06:00.999 11:17:43 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
Running I/O for 1 seconds...[2024-11-15 11:17:43.739289] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization...
00:06:00.999 [2024-11-15 11:17:43.740319] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57912 ] 00:06:00.999 [2024-11-15 11:17:43.926343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.259 [2024-11-15 11:17:44.071839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.259 [2024-11-15 11:17:44.071986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.259 [2024-11-15 11:17:44.072134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.259 Running I/O for 1 seconds...[2024-11-15 11:17:44.072151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.638 00:06:02.638 lcore 0: 130856 00:06:02.638 lcore 1: 130855 00:06:02.638 lcore 2: 130858 00:06:02.638 lcore 3: 130859 00:06:02.638 done. 
00:06:02.638 00:06:02.638 real 0m1.614s 00:06:02.638 user 0m4.358s 00:06:02.638 sys 0m0.128s 00:06:02.638 11:17:45 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.638 11:17:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.638 ************************************ 00:06:02.638 END TEST event_perf 00:06:02.638 ************************************ 00:06:02.638 11:17:45 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:02.638 11:17:45 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:02.638 11:17:45 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:02.638 11:17:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.638 ************************************ 00:06:02.638 START TEST event_reactor 00:06:02.638 ************************************ 00:06:02.638 11:17:45 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:02.638 [2024-11-15 11:17:45.402172] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:06:02.638 [2024-11-15 11:17:45.402391] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57952 ] 00:06:02.638 [2024-11-15 11:17:45.573581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.897 [2024-11-15 11:17:45.700713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.275 test_start 00:06:04.275 oneshot 00:06:04.275 tick 100 00:06:04.275 tick 100 00:06:04.275 tick 250 00:06:04.275 tick 100 00:06:04.275 tick 100 00:06:04.275 tick 100 00:06:04.275 tick 250 00:06:04.275 tick 500 00:06:04.275 tick 100 00:06:04.275 tick 100 00:06:04.275 tick 250 00:06:04.275 tick 100 00:06:04.275 tick 100 00:06:04.275 test_end 00:06:04.275 00:06:04.275 real 0m1.569s 00:06:04.275 user 0m1.366s 00:06:04.275 sys 0m0.094s 00:06:04.275 11:17:46 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:04.275 ************************************ 00:06:04.275 END TEST event_reactor 00:06:04.275 ************************************ 00:06:04.275 11:17:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:04.275 11:17:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.275 11:17:46 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:04.275 11:17:46 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:04.275 11:17:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.275 ************************************ 00:06:04.275 START TEST event_reactor_perf 00:06:04.275 ************************************ 00:06:04.275 11:17:46 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.275 [2024-11-15 
11:17:47.025766] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:06:04.275 [2024-11-15 11:17:47.025906] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57994 ] 00:06:04.275 [2024-11-15 11:17:47.187422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.534 [2024-11-15 11:17:47.321342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.913 test_start 00:06:05.913 test_end 00:06:05.913 Performance: 343055 events per second 00:06:05.913 00:06:05.913 real 0m1.541s 00:06:05.913 user 0m1.340s 00:06:05.913 sys 0m0.093s 00:06:05.913 11:17:48 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.913 11:17:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.913 ************************************ 00:06:05.913 END TEST event_reactor_perf 00:06:05.913 ************************************ 00:06:05.913 11:17:48 event -- event/event.sh@49 -- # uname -s 00:06:05.913 11:17:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:05.913 11:17:48 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:05.913 11:17:48 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:05.913 11:17:48 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.913 11:17:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.913 ************************************ 00:06:05.913 START TEST event_scheduler 00:06:05.913 ************************************ 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:05.913 * Looking for test storage... 
00:06:05.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:05.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.913 11:17:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.913 --rc genhtml_branch_coverage=1 00:06:05.913 --rc genhtml_function_coverage=1 00:06:05.913 --rc genhtml_legend=1 00:06:05.913 --rc geninfo_all_blocks=1 00:06:05.913 --rc geninfo_unexecuted_blocks=1 00:06:05.913 00:06:05.913 ' 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.913 
--rc genhtml_branch_coverage=1 00:06:05.913 --rc genhtml_function_coverage=1 00:06:05.913 --rc genhtml_legend=1 00:06:05.913 --rc geninfo_all_blocks=1 00:06:05.913 --rc geninfo_unexecuted_blocks=1 00:06:05.913 00:06:05.913 ' 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:05.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.913 --rc genhtml_branch_coverage=1 00:06:05.913 --rc genhtml_function_coverage=1 00:06:05.913 --rc genhtml_legend=1 00:06:05.913 --rc geninfo_all_blocks=1 00:06:05.913 --rc geninfo_unexecuted_blocks=1 00:06:05.913 00:06:05.913 ' 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:05.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.913 --rc genhtml_branch_coverage=1 00:06:05.913 --rc genhtml_function_coverage=1 00:06:05.913 --rc genhtml_legend=1 00:06:05.913 --rc geninfo_all_blocks=1 00:06:05.913 --rc geninfo_unexecuted_blocks=1 00:06:05.913 00:06:05.913 ' 00:06:05.913 11:17:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:05.913 11:17:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58059 00:06:05.913 11:17:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.913 11:17:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58059 00:06:05.913 11:17:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58059 ']' 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:05.913 11:17:48 event.event_scheduler -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:05.913 11:17:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.172 [2024-11-15 11:17:48.904317] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:06:06.172 [2024-11-15 11:17:48.904779] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58059 ] 00:06:06.172 [2024-11-15 11:17:49.096829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.431 [2024-11-15 11:17:49.279279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.431 [2024-11-15 11:17:49.279665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.431 [2024-11-15 11:17:49.279851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.431 [2024-11-15 11:17:49.279429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.000 11:17:49 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:07.000 11:17:49 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:07.000 11:17:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:07.000 11:17:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.000 11:17:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.000 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.000 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.000 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.000 POWER: Cannot set governor of lcore 0 to performance 00:06:07.000 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.000 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.000 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.000 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.000 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:07.000 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:07.000 POWER: Unable to set Power Management Environment for lcore 0 00:06:07.000 [2024-11-15 11:17:49.824677] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:07.000 [2024-11-15 11:17:49.824713] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:07.000 [2024-11-15 11:17:49.824744] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:07.000 [2024-11-15 11:17:49.824779] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:07.000 [2024-11-15 11:17:49.824794] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:07.000 [2024-11-15 11:17:49.824807] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:07.000 11:17:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.000 11:17:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:07.000 11:17:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.000 11:17:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.259 [2024-11-15 11:17:50.162934] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:07.259 11:17:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.259 11:17:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:07.259 11:17:50 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:07.259 11:17:50 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.259 11:17:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.259 ************************************ 00:06:07.259 START TEST scheduler_create_thread 00:06:07.259 ************************************ 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.259 2 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.259 3 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.259 4 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.259 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.518 5 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.518 6 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:07.518 7 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.518 8 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.518 9 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.518 10 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.518 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.519 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.519 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:07.519 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:07.519 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.519 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.519 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.519 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:07.519 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.519 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.896 11:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.896 11:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:08.896 11:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:08.896 11:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.896 11:17:51 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.273 11:17:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.273 ************************************ 00:06:10.273 END TEST scheduler_create_thread 00:06:10.273 ************************************ 00:06:10.273 00:06:10.273 real 0m2.620s 00:06:10.273 user 0m0.019s 00:06:10.273 sys 0m0.007s 00:06:10.273 11:17:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.273 11:17:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.273 11:17:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:10.273 11:17:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58059 00:06:10.273 11:17:52 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58059 ']' 00:06:10.273 11:17:52 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58059 00:06:10.273 11:17:52 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:10.273 11:17:52 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:10.273 11:17:52 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58059 00:06:10.273 killing process with pid 58059 00:06:10.273 11:17:52 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:10.273 11:17:52 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:10.273 11:17:52 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58059' 00:06:10.273 11:17:52 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58059 00:06:10.273 11:17:52 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58059 00:06:10.532 [2024-11-15 11:17:53.274269] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:11.471 00:06:11.471 real 0m5.740s 00:06:11.471 user 0m9.627s 00:06:11.471 sys 0m0.602s 00:06:11.471 11:17:54 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:11.471 ************************************ 00:06:11.471 END TEST event_scheduler 00:06:11.471 11:17:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.471 ************************************ 00:06:11.471 11:17:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:11.471 11:17:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:11.471 11:17:54 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:11.471 11:17:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:11.471 11:17:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.471 ************************************ 00:06:11.471 START TEST app_repeat 00:06:11.471 ************************************ 00:06:11.471 11:17:54 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58176 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.471 11:17:54 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58176' 00:06:11.471 Process app_repeat pid: 58176 00:06:11.471 spdk_app_start Round 0 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:11.471 11:17:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58176 /var/tmp/spdk-nbd.sock 00:06:11.471 11:17:54 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58176 ']' 00:06:11.471 11:17:54 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.471 11:17:54 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:11.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.471 11:17:54 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.471 11:17:54 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:11.471 11:17:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.729 [2024-11-15 11:17:54.469221] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:06:11.729 [2024-11-15 11:17:54.469404] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58176 ] 00:06:11.729 [2024-11-15 11:17:54.656876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.988 [2024-11-15 11:17:54.798863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.988 [2024-11-15 11:17:54.798878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.922 11:17:55 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:12.922 11:17:55 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:12.922 11:17:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.922 Malloc0 00:06:12.922 11:17:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.490 Malloc1 00:06:13.490 11:17:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.490 11:17:56 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.490 11:17:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.749 /dev/nbd0 00:06:13.749 11:17:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.749 11:17:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.749 11:17:56 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:13.749 11:17:56 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:13.749 11:17:56 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:13.749 11:17:56 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:13.749 11:17:56 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:13.750 11:17:56 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:13.750 11:17:56 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:13.750 11:17:56 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:13.750 11:17:56 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.750 1+0 records in 00:06:13.750 1+0 
records out 00:06:13.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581326 s, 7.0 MB/s 00:06:13.750 11:17:56 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.750 11:17:56 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:13.750 11:17:56 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.750 11:17:56 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:13.750 11:17:56 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:13.750 11:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.750 11:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.750 11:17:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.008 /dev/nbd1 00:06:14.008 11:17:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.008 11:17:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.008 1+0 records in 00:06:14.008 1+0 records out 00:06:14.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440769 s, 9.3 MB/s 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:14.008 11:17:56 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:14.008 11:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.008 11:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.008 11:17:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.008 11:17:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.008 11:17:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.267 { 00:06:14.267 "nbd_device": "/dev/nbd0", 00:06:14.267 "bdev_name": "Malloc0" 00:06:14.267 }, 00:06:14.267 { 00:06:14.267 "nbd_device": "/dev/nbd1", 00:06:14.267 "bdev_name": "Malloc1" 00:06:14.267 } 00:06:14.267 ]' 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.267 { 00:06:14.267 "nbd_device": "/dev/nbd0", 00:06:14.267 "bdev_name": "Malloc0" 00:06:14.267 }, 00:06:14.267 { 00:06:14.267 "nbd_device": "/dev/nbd1", 00:06:14.267 "bdev_name": "Malloc1" 00:06:14.267 } 00:06:14.267 ]' 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.267 /dev/nbd1' 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.267 /dev/nbd1' 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.267 256+0 records in 00:06:14.267 256+0 records out 00:06:14.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101489 s, 103 MB/s 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.267 11:17:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.267 256+0 records in 00:06:14.267 256+0 records out 00:06:14.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293279 s, 35.8 MB/s 00:06:14.268 11:17:57 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.268 11:17:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.527 256+0 records in 00:06:14.527 256+0 records out 00:06:14.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0358792 s, 29.2 MB/s 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.527 11:17:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.786 11:17:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.786 11:17:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.786 11:17:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.786 11:17:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.786 11:17:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.786 11:17:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.786 11:17:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.786 11:17:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.786 11:17:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.786 11:17:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.045 11:17:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.045 11:17:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.045 11:17:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.045 11:17:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.045 11:17:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.045 11:17:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.045 11:17:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:15.045 11:17:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.045 11:17:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.045 11:17:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.045 11:17:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.304 11:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.304 11:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.304 11:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.304 11:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.304 11:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.304 11:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.304 11:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.304 11:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.304 11:17:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.304 11:17:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.304 11:17:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.304 11:17:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.304 11:17:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.562 11:17:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.939 [2024-11-15 11:17:59.559810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.940 [2024-11-15 11:17:59.682544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.940 [2024-11-15 11:17:59.682556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.197 
[2024-11-15 11:17:59.893349] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.197 [2024-11-15 11:17:59.893460] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.575 spdk_app_start Round 1 00:06:18.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.575 11:18:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.575 11:18:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:18.575 11:18:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58176 /var/tmp/spdk-nbd.sock 00:06:18.575 11:18:01 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58176 ']' 00:06:18.575 11:18:01 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.575 11:18:01 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:18.575 11:18:01 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:18.575 11:18:01 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:18.575 11:18:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.142 11:18:01 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.142 11:18:01 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:19.142 11:18:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.400 Malloc0 00:06:19.400 11:18:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.659 Malloc1 00:06:19.659 11:18:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.659 11:18:02 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.659 11:18:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.918 /dev/nbd0 00:06:19.918 11:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.918 11:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.918 1+0 records in 00:06:19.918 1+0 records out 00:06:19.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345362 s, 11.9 MB/s 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.918 
11:18:02 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:19.918 11:18:02 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:19.918 11:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.918 11:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.918 11:18:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.177 /dev/nbd1 00:06:20.177 11:18:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.177 11:18:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.177 1+0 records in 00:06:20.177 1+0 records out 00:06:20.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411583 s, 10.0 MB/s 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:20.177 11:18:03 event.app_repeat 
-- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:20.177 11:18:03 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:20.177 11:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.177 11:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.177 11:18:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.177 11:18:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.177 11:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.745 { 00:06:20.745 "nbd_device": "/dev/nbd0", 00:06:20.745 "bdev_name": "Malloc0" 00:06:20.745 }, 00:06:20.745 { 00:06:20.745 "nbd_device": "/dev/nbd1", 00:06:20.745 "bdev_name": "Malloc1" 00:06:20.745 } 00:06:20.745 ]' 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.745 { 00:06:20.745 "nbd_device": "/dev/nbd0", 00:06:20.745 "bdev_name": "Malloc0" 00:06:20.745 }, 00:06:20.745 { 00:06:20.745 "nbd_device": "/dev/nbd1", 00:06:20.745 "bdev_name": "Malloc1" 00:06:20.745 } 00:06:20.745 ]' 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.745 /dev/nbd1' 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.745 /dev/nbd1' 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.745 
11:18:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.745 256+0 records in 00:06:20.745 256+0 records out 00:06:20.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00701231 s, 150 MB/s 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.745 256+0 records in 00:06:20.745 256+0 records out 00:06:20.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225996 s, 46.4 MB/s 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.745 256+0 records in 00:06:20.745 256+0 records out 00:06:20.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032041 s, 32.7 MB/s 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.745 11:18:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.746 11:18:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.005 11:18:03 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.005 11:18:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.005 11:18:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.005 11:18:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.005 11:18:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.005 11:18:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.005 11:18:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.005 11:18:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.005 11:18:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.005 11:18:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.264 11:18:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.264 11:18:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.264 11:18:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.264 11:18:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.264 11:18:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.264 11:18:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.264 11:18:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.264 11:18:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.264 11:18:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.264 11:18:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.264 11:18:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.523 11:18:04 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.523 11:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.523 11:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.782 11:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.782 11:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.782 11:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.782 11:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.782 11:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.782 11:18:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.782 11:18:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.782 11:18:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.782 11:18:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.782 11:18:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.039 11:18:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.416 [2024-11-15 11:18:06.019026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.416 [2024-11-15 11:18:06.140692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.416 [2024-11-15 11:18:06.140693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.416 [2024-11-15 11:18:06.334427] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.416 [2024-11-15 11:18:06.334570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:25.319 11:18:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:25.319 spdk_app_start Round 2
00:06:25.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:25.319 11:18:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:25.319 11:18:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58176 /var/tmp/spdk-nbd.sock
00:06:25.319 11:18:07 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58176 ']'
00:06:25.319 11:18:07 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:25.319 11:18:07 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:25.319 11:18:07 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:25.319 11:18:07 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:25.319 11:18:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:25.319 11:18:08 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:25.319 11:18:08 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:06:25.319 11:18:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:25.887 Malloc0
00:06:25.887 11:18:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:26.147 Malloc1
00:06:26.147 11:18:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:26.147 11:18:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:26.406 /dev/nbd0
00:06:26.406 11:18:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:26.406 11:18:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:26.406 1+0 records in
00:06:26.406 1+0 records out
00:06:26.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265584 s, 15.4 MB/s
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:06:26.406 11:18:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:06:26.406 11:18:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:26.406 11:18:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:26.406 11:18:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:26.682 /dev/nbd1
00:06:26.682 11:18:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:26.682 11:18:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:26.682 1+0 records in
00:06:26.682 1+0 records out
00:06:26.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338642 s, 12.1 MB/s
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:06:26.682 11:18:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:06:26.682 11:18:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:26.682 11:18:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:26.682 11:18:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:26.682 11:18:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:26.682 11:18:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:26.941 11:18:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:26.941 {
00:06:26.941 "nbd_device": "/dev/nbd0",
00:06:26.941 "bdev_name": "Malloc0"
00:06:26.941 },
00:06:26.941 {
00:06:26.941 "nbd_device": "/dev/nbd1",
00:06:26.941 "bdev_name": "Malloc1"
00:06:26.941 }
00:06:26.941 ]'
00:06:26.941 11:18:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:26.941 {
00:06:26.941 "nbd_device": "/dev/nbd0",
00:06:26.941 "bdev_name": "Malloc0"
00:06:26.941 },
00:06:26.941 {
00:06:26.941 "nbd_device": "/dev/nbd1",
00:06:26.941 "bdev_name": "Malloc1"
00:06:26.941 }
00:06:26.941 ]'
00:06:26.941 11:18:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:26.941 11:18:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:26.941 /dev/nbd1'
00:06:26.941 11:18:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:26.941 /dev/nbd1'
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:27.200 256+0 records in
00:06:27.200 256+0 records out
00:06:27.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650669 s, 161 MB/s
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:27.200 256+0 records in
00:06:27.200 256+0 records out
00:06:27.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245261 s, 42.8 MB/s
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:27.200 256+0 records in
00:06:27.200 256+0 records out
00:06:27.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280776 s, 37.3 MB/s
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:27.200 11:18:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:27.201 11:18:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:27.459 11:18:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:27.459 11:18:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:27.459 11:18:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:27.459 11:18:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:27.459 11:18:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:27.459 11:18:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:27.459 11:18:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:27.459 11:18:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:27.459 11:18:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:27.459 11:18:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:27.719 11:18:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:27.719 11:18:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:27.719 11:18:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:27.719 11:18:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:27.719 11:18:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:27.719 11:18:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:27.719 11:18:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:27.719 11:18:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:27.719 11:18:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:27.719 11:18:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:27.719 11:18:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:27.978 11:18:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:27.978 11:18:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:27.978 11:18:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:27.978 11:18:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:27.978 11:18:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:27.978 11:18:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:27.978 11:18:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:27.978 11:18:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:27.978 11:18:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:27.978 11:18:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:27.978 11:18:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:27.978 11:18:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:27.978 11:18:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:28.546 11:18:11 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:29.483 [2024-11-15 11:18:12.389852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:29.740 [2024-11-15 11:18:12.513272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:29.740 [2024-11-15 11:18:12.513283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.999 [2024-11-15 11:18:12.699137] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:29.999 [2024-11-15 11:18:12.699268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:31.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:31.943 11:18:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58176 /var/tmp/spdk-nbd.sock
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58176 ']'
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:06:31.943 11:18:14 event.app_repeat -- event/event.sh@39 -- # killprocess 58176
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58176 ']'
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58176
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@957 -- # uname
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58176
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 58176
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58176'
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58176
00:06:31.943 11:18:14 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58176
00:06:32.882 spdk_app_start is called in Round 0.
00:06:32.882 Shutdown signal received, stop current app iteration
00:06:32.882 Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 reinitialization...
00:06:32.882 spdk_app_start is called in Round 1.
00:06:32.882 Shutdown signal received, stop current app iteration
00:06:32.882 Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 reinitialization...
00:06:32.882 spdk_app_start is called in Round 2.
00:06:32.882 Shutdown signal received, stop current app iteration
00:06:32.882 Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 reinitialization...
00:06:32.882 spdk_app_start is called in Round 3.
00:06:32.882 Shutdown signal received, stop current app iteration
00:06:32.882 11:18:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:32.882 11:18:15 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:32.882
00:06:32.882 real 0m21.253s
00:06:32.882 user 0m46.656s
00:06:32.882 sys 0m3.261s
00:06:32.882 11:18:15 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:32.882 11:18:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:32.882 ************************************
00:06:32.882 END TEST app_repeat
00:06:32.882 ************************************
00:06:32.882 11:18:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:32.882 11:18:15 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:32.882 11:18:15 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:32.882 11:18:15 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:32.882 11:18:15 event -- common/autotest_common.sh@10 -- # set +x
00:06:32.882 ************************************
00:06:32.882 START TEST cpu_locks
00:06:32.882 ************************************
00:06:32.882 11:18:15 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:32.882 * Looking for test storage...
00:06:32.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:32.882 11:18:15 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:32.883 11:18:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version
00:06:32.883 11:18:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:33.142 11:18:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:33.142 11:18:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:33.142 11:18:15 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:33.142 11:18:15 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:33.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:33.142 --rc genhtml_branch_coverage=1
00:06:33.142 --rc genhtml_function_coverage=1
00:06:33.142 --rc genhtml_legend=1
00:06:33.142 --rc geninfo_all_blocks=1
00:06:33.142 --rc geninfo_unexecuted_blocks=1
00:06:33.142
00:06:33.142 '
00:06:33.142 11:18:15 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:33.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:33.142 --rc genhtml_branch_coverage=1
00:06:33.142 --rc genhtml_function_coverage=1
00:06:33.142 --rc genhtml_legend=1
00:06:33.142 --rc geninfo_all_blocks=1
00:06:33.142 --rc geninfo_unexecuted_blocks=1
00:06:33.142
00:06:33.142 '
00:06:33.142 11:18:15 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:33.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:33.142 --rc genhtml_branch_coverage=1
00:06:33.142 --rc genhtml_function_coverage=1
00:06:33.142 --rc genhtml_legend=1
00:06:33.142 --rc geninfo_all_blocks=1
00:06:33.142 --rc geninfo_unexecuted_blocks=1
00:06:33.142
00:06:33.142 '
00:06:33.142 11:18:15 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:33.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:33.142 --rc genhtml_branch_coverage=1
00:06:33.142 --rc genhtml_function_coverage=1
00:06:33.142 --rc genhtml_legend=1
00:06:33.142 --rc geninfo_all_blocks=1
00:06:33.142 --rc geninfo_unexecuted_blocks=1
00:06:33.142
00:06:33.142 '
00:06:33.142 11:18:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:33.142 11:18:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:33.142 11:18:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:33.142 11:18:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:33.142 11:18:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:33.142 11:18:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:33.142 11:18:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:33.142 ************************************
00:06:33.142 START TEST default_locks
00:06:33.142 ************************************
00:06:33.142 11:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks
00:06:33.142 11:18:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58644
00:06:33.142 11:18:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58644
00:06:33.142 11:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58644 ']'
00:06:33.142 11:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:33.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:33.142 11:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:33.142 11:18:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:33.142 11:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:33.142 11:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:33.142 11:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:33.142 [2024-11-15 11:18:16.046851] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization...
00:06:33.142 [2024-11-15 11:18:16.047045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58644 ]
00:06:33.400 [2024-11-15 11:18:16.240650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.659 [2024-11-15 11:18:16.409099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.593 11:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:34.593 11:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0
00:06:34.593 11:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58644
00:06:34.593 11:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58644
00:06:34.593 11:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:34.852 11:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58644
00:06:34.852 11:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58644 ']'
00:06:34.852 11:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58644
00:06:34.852 11:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname
00:06:34.852 11:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:34.852 11:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58644
00:06:34.852 killing process with pid 58644
11:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:34.852 11:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:34.852 11:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58644'
00:06:34.852 11:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58644
00:06:34.852 11:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58644
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58644
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58644
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58644
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58644 ']'
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:37.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:37.386 ERROR: process (pid: 58644) is no longer running
00:06:37.386 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58644) - No such process
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:37.386
00:06:37.386 real 0m3.844s
00:06:37.386 user 0m3.758s
00:06:37.386 sys 0m0.768s
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:37.386 11:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:37.386 ************************************
00:06:37.386 END TEST default_locks
00:06:37.386 ************************************
00:06:37.386 11:18:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:37.386 11:18:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:37.386 11:18:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:37.386 11:18:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:37.386 ************************************
00:06:37.386 START TEST default_locks_via_rpc
00:06:37.386 ************************************
00:06:37.386 11:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc
00:06:37.386 11:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58715
00:06:37.386 11:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58715
00:06:37.386 11:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58715 ']'
00:06:37.386 11:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:37.386 11:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:37.386 11:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:37.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:37.386 11:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:37.386 11:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:37.386 11:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:37.387 [2024-11-15 11:18:19.949483] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization...
00:06:37.387 [2024-11-15 11:18:19.949678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58715 ] 00:06:37.387 [2024-11-15 11:18:20.130813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.387 [2024-11-15 11:18:20.266612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.329 11:18:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58715 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.329 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58715 00:06:38.906 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58715 00:06:38.906 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58715 ']' 00:06:38.906 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58715 00:06:38.906 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:38.906 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:38.906 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58715 00:06:38.906 killing process with pid 58715 00:06:38.906 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:38.906 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:38.906 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58715' 00:06:38.906 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58715 00:06:38.906 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58715 00:06:41.438 00:06:41.438 real 0m3.954s 00:06:41.438 user 0m3.852s 00:06:41.438 sys 0m0.806s 00:06:41.438 11:18:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:41.438 
************************************ 00:06:41.438 11:18:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.438 END TEST default_locks_via_rpc 00:06:41.438 ************************************ 00:06:41.438 11:18:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:41.438 11:18:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:41.438 11:18:23 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.438 11:18:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.438 ************************************ 00:06:41.438 START TEST non_locking_app_on_locked_coremask 00:06:41.438 ************************************ 00:06:41.438 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:41.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.438 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58789 00:06:41.438 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58789 /var/tmp/spdk.sock 00:06:41.438 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.438 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58789 ']' 00:06:41.438 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.438 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:41.438 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.438 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:41.438 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.438 [2024-11-15 11:18:23.954114] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:06:41.439 [2024-11-15 11:18:23.954586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58789 ] 00:06:41.439 [2024-11-15 11:18:24.125708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.439 [2024-11-15 11:18:24.256700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.375 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.375 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:42.375 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58805 00:06:42.375 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:42.375 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58805 /var/tmp/spdk2.sock 00:06:42.375 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58805 ']' 00:06:42.375 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.375 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.375 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:42.375 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.375 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.375 [2024-11-15 11:18:25.263980] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:06:42.375 [2024-11-15 11:18:25.264547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58805 ] 00:06:42.634 [2024-11-15 11:18:25.454730] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:42.634 [2024-11-15 11:18:25.454845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.892 [2024-11-15 11:18:25.721716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.425 11:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:45.425 11:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:45.425 11:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58789 00:06:45.425 11:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58789 00:06:45.425 11:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.994 11:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58789 00:06:45.994 11:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58789 ']' 00:06:45.994 11:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58789 00:06:45.994 11:18:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:45.994 11:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:45.994 11:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58789 00:06:45.994 killing process with pid 58789 00:06:45.994 11:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:45.994 11:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:45.994 11:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58789' 00:06:45.994 11:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58789 00:06:45.994 11:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58789 00:06:50.189 11:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58805 00:06:50.189 11:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58805 ']' 00:06:50.189 11:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58805 00:06:50.189 11:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:50.189 11:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:50.189 11:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58805 00:06:50.448 killing process with pid 58805 00:06:50.448 11:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:06:50.448 11:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:50.448 11:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58805' 00:06:50.448 11:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58805 00:06:50.448 11:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58805 00:06:52.353 ************************************ 00:06:52.353 END TEST non_locking_app_on_locked_coremask 00:06:52.353 ************************************ 00:06:52.353 00:06:52.353 real 0m11.435s 00:06:52.353 user 0m11.736s 00:06:52.353 sys 0m1.746s 00:06:52.353 11:18:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.353 11:18:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.612 11:18:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:52.612 11:18:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:52.612 11:18:35 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.612 11:18:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.612 ************************************ 00:06:52.612 START TEST locking_app_on_unlocked_coremask 00:06:52.612 ************************************ 00:06:52.612 11:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:52.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:52.612 11:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58955 00:06:52.612 11:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58955 /var/tmp/spdk.sock 00:06:52.612 11:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:52.612 11:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58955 ']' 00:06:52.612 11:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.612 11:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.613 11:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.613 11:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.613 11:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.613 [2024-11-15 11:18:35.455917] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:06:52.613 [2024-11-15 11:18:35.456713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58955 ] 00:06:52.871 [2024-11-15 11:18:35.638252] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.871 [2024-11-15 11:18:35.638575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.871 [2024-11-15 11:18:35.766978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.852 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.852 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:53.852 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58971 00:06:53.852 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.852 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58971 /var/tmp/spdk2.sock 00:06:53.852 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58971 ']' 00:06:53.852 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.852 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.852 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.852 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.852 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.852 [2024-11-15 11:18:36.771002] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:06:53.852 [2024-11-15 11:18:36.771658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58971 ] 00:06:54.111 [2024-11-15 11:18:36.968573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.369 [2024-11-15 11:18:37.236883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.902 11:18:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.902 11:18:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:56.902 11:18:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58971 00:06:56.902 11:18:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58971 00:06:56.902 11:18:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.469 11:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58955 00:06:57.469 11:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58955 ']' 00:06:57.469 11:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58955 00:06:57.469 11:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:57.469 11:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:57.469 11:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58955 00:06:57.469 killing process with pid 58955 00:06:57.469 11:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:57.469 11:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:57.469 11:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58955' 00:06:57.469 11:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58955 00:06:57.469 11:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58955 00:07:02.742 11:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58971 00:07:02.742 11:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58971 ']' 00:07:02.742 11:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58971 00:07:02.742 11:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:02.742 11:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:02.742 11:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58971 00:07:02.742 killing process with pid 58971 00:07:02.742 11:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:02.742 11:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:02.742 11:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58971' 00:07:02.742 11:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58971 00:07:02.742 11:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@976 -- # wait 58971 00:07:04.120 00:07:04.120 real 0m11.538s 00:07:04.120 user 0m11.919s 00:07:04.120 sys 0m1.668s 00:07:04.120 11:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.120 11:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.120 ************************************ 00:07:04.120 END TEST locking_app_on_unlocked_coremask 00:07:04.120 ************************************ 00:07:04.120 11:18:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:04.120 11:18:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:04.120 11:18:46 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.120 11:18:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.120 ************************************ 00:07:04.120 START TEST locking_app_on_locked_coremask 00:07:04.120 ************************************ 00:07:04.120 11:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:07:04.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.120 11:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59124 00:07:04.120 11:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.120 11:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59124 /var/tmp/spdk.sock 00:07:04.120 11:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59124 ']' 00:07:04.120 11:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.120 11:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:04.120 11:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.120 11:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:04.120 11:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.120 [2024-11-15 11:18:47.029709] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:07:04.120 [2024-11-15 11:18:47.030154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59124 ] 00:07:04.378 [2024-11-15 11:18:47.201791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.638 [2024-11-15 11:18:47.340350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.573 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:05.573 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:05.573 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59140 00:07:05.573 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59140 /var/tmp/spdk2.sock 00:07:05.573 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:05.573 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:05.573 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59140 /var/tmp/spdk2.sock 00:07:05.573 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:05.573 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.573 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:05.573 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:07:05.574 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59140 /var/tmp/spdk2.sock 00:07:05.574 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59140 ']' 00:07:05.574 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.574 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:05.574 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.574 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:05.574 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.574 [2024-11-15 11:18:48.359957] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:05.574 [2024-11-15 11:18:48.360506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59140 ] 00:07:05.832 [2024-11-15 11:18:48.553019] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59124 has claimed it. 00:07:05.833 [2024-11-15 11:18:48.553091] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:06.091 ERROR: process (pid: 59140) is no longer running 00:07:06.091 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59140) - No such process 00:07:06.091 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:06.091 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:06.091 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:06.091 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.091 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.091 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.091 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59124 00:07:06.091 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59124 00:07:06.091 11:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.351 11:18:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59124 00:07:06.351 11:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59124 ']' 00:07:06.351 11:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59124 00:07:06.351 11:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:06.610 11:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:06.610 11:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59124 00:07:06.610 
11:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:06.610 killing process with pid 59124 00:07:06.610 11:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:06.610 11:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59124' 00:07:06.610 11:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59124 00:07:06.610 11:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59124 00:07:09.142 00:07:09.142 real 0m4.582s 00:07:09.142 user 0m4.805s 00:07:09.142 sys 0m0.945s 00:07:09.142 11:18:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.142 ************************************ 00:07:09.142 END TEST locking_app_on_locked_coremask 00:07:09.142 ************************************ 00:07:09.142 11:18:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.142 11:18:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:09.142 11:18:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:09.142 11:18:51 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.142 11:18:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.142 ************************************ 00:07:09.142 START TEST locking_overlapped_coremask 00:07:09.142 ************************************ 00:07:09.142 11:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:09.142 11:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59210 00:07:09.142 11:18:51 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:09.142 11:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59210 /var/tmp/spdk.sock 00:07:09.142 11:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59210 ']' 00:07:09.142 11:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.142 11:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:09.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.142 11:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.142 11:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:09.142 11:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.142 [2024-11-15 11:18:51.702044] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:07:09.143 [2024-11-15 11:18:51.702333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59210 ] 00:07:09.143 [2024-11-15 11:18:51.878751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.143 [2024-11-15 11:18:52.014285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.143 [2024-11-15 11:18:52.014379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.143 [2024-11-15 11:18:52.014397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59233 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59233 /var/tmp/spdk2.sock 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59233 /var/tmp/spdk2.sock 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59233 /var/tmp/spdk2.sock 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59233 ']' 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.076 11:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.334 [2024-11-15 11:18:53.077523] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:10.334 [2024-11-15 11:18:53.078006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59233 ] 00:07:10.334 [2024-11-15 11:18:53.269563] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59210 has claimed it. 00:07:10.334 [2024-11-15 11:18:53.269671] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:10.900 ERROR: process (pid: 59233) is no longer running 00:07:10.900 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59233) - No such process 00:07:10.900 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.900 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:10.900 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:10.900 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:10.900 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:10.900 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:10.900 11:18:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59210 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59210 ']' 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59210 00:07:10.901 11:18:53 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59210 00:07:10.901 killing process with pid 59210 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59210' 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59210 00:07:10.901 11:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59210 00:07:13.436 ************************************ 00:07:13.436 END TEST locking_overlapped_coremask 00:07:13.436 ************************************ 00:07:13.436 00:07:13.436 real 0m4.354s 00:07:13.436 user 0m11.727s 00:07:13.436 sys 0m0.825s 00:07:13.436 11:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.436 11:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.436 11:18:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:13.436 11:18:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:13.436 11:18:55 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.436 11:18:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.436 ************************************ 00:07:13.436 START TEST 
locking_overlapped_coremask_via_rpc 00:07:13.436 ************************************ 00:07:13.437 11:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:13.437 11:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59297 00:07:13.437 11:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59297 /var/tmp/spdk.sock 00:07:13.437 11:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59297 ']' 00:07:13.437 11:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:13.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.437 11:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.437 11:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:13.437 11:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.437 11:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:13.437 11:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.437 [2024-11-15 11:18:56.117919] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:07:13.437 [2024-11-15 11:18:56.118573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:07:13.437 [2024-11-15 11:18:56.302158] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:13.437 [2024-11-15 11:18:56.302246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.695 [2024-11-15 11:18:56.438377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.695 [2024-11-15 11:18:56.438486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.695 [2024-11-15 11:18:56.438495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.629 11:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:14.629 11:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:14.629 11:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:14.629 11:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59315 00:07:14.629 11:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59315 /var/tmp/spdk2.sock 00:07:14.629 11:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59315 ']' 00:07:14.629 11:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.629 11:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:14.629 11:18:57 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.629 11:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:14.629 11:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.629 [2024-11-15 11:18:57.476444] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:14.629 [2024-11-15 11:18:57.476913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59315 ] 00:07:14.887 [2024-11-15 11:18:57.671279] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:14.887 [2024-11-15 11:18:57.671382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.148 [2024-11-15 11:18:57.953964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.148 [2024-11-15 11:18:57.957295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.148 [2024-11-15 11:18:57.957300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.686 11:19:00 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.686 [2024-11-15 11:19:00.265423] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59297 has claimed it. 00:07:17.686 request: 00:07:17.686 { 00:07:17.686 "method": "framework_enable_cpumask_locks", 00:07:17.686 "req_id": 1 00:07:17.686 } 00:07:17.686 Got JSON-RPC error response 00:07:17.686 response: 00:07:17.686 { 00:07:17.686 "code": -32603, 00:07:17.686 "message": "Failed to claim CPU core: 2" 00:07:17.686 } 00:07:17.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59297 /var/tmp/spdk.sock 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59297 ']' 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59315 /var/tmp/spdk2.sock 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59315 ']' 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.686 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.945 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:17.945 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:17.945 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:17.945 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:17.945 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:17.945 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:17.945 00:07:17.945 real 0m4.862s 00:07:17.945 user 0m1.686s 00:07:17.945 sys 0m0.237s 00:07:17.945 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.945 11:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.945 ************************************ 00:07:17.945 END TEST locking_overlapped_coremask_via_rpc 00:07:17.945 ************************************ 00:07:17.945 11:19:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:17.945 11:19:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59297 ]] 00:07:17.945 11:19:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59297 00:07:17.945 11:19:00 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59297 ']' 00:07:17.945 11:19:00 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59297 00:07:17.945 11:19:00 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:17.945 11:19:00 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:17.945 11:19:00 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59297 00:07:18.203 killing process with pid 59297 00:07:18.203 11:19:00 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:18.203 11:19:00 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:18.203 11:19:00 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59297' 00:07:18.203 11:19:00 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59297 00:07:18.203 11:19:00 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59297 00:07:20.734 11:19:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59315 ]] 00:07:20.734 11:19:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59315 00:07:20.734 11:19:03 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59315 ']' 00:07:20.734 11:19:03 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59315 00:07:20.734 11:19:03 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:20.734 11:19:03 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:20.734 11:19:03 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59315 00:07:20.734 killing process with pid 59315 00:07:20.734 11:19:03 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:20.734 11:19:03 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:20.734 11:19:03 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59315' 00:07:20.734 11:19:03 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59315 00:07:20.735 11:19:03 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59315 00:07:22.636 11:19:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.636 11:19:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:22.636 11:19:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59297 ]] 00:07:22.636 11:19:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59297 00:07:22.636 Process with pid 59297 is not found 00:07:22.636 Process with pid 59315 is not found 00:07:22.636 11:19:05 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59297 ']' 00:07:22.636 11:19:05 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59297 00:07:22.636 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59297) - No such process 00:07:22.636 11:19:05 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59297 is not found' 00:07:22.636 11:19:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59315 ]] 00:07:22.636 11:19:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59315 00:07:22.636 11:19:05 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59315 ']' 00:07:22.636 11:19:05 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59315 00:07:22.636 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59315) - No such process 00:07:22.636 11:19:05 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59315 is not found' 00:07:22.636 11:19:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.636 00:07:22.636 real 0m49.764s 00:07:22.636 user 1m25.630s 00:07:22.636 sys 0m8.464s 00:07:22.636 11:19:05 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.636 11:19:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.636 
************************************ 00:07:22.636 END TEST cpu_locks 00:07:22.636 ************************************ 00:07:22.636 00:07:22.636 real 1m22.067s 00:07:22.636 user 2m29.248s 00:07:22.636 sys 0m12.921s 00:07:22.636 11:19:05 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.636 11:19:05 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.636 ************************************ 00:07:22.636 END TEST event 00:07:22.636 ************************************ 00:07:22.636 11:19:05 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:22.636 11:19:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:22.636 11:19:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.636 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:07:22.636 ************************************ 00:07:22.636 START TEST thread 00:07:22.636 ************************************ 00:07:22.636 11:19:05 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:22.895 * Looking for test storage... 
00:07:22.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:22.895 11:19:05 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:22.895 11:19:05 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:22.895 11:19:05 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:22.895 11:19:05 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:22.895 11:19:05 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.895 11:19:05 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.895 11:19:05 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.895 11:19:05 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.895 11:19:05 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.895 11:19:05 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.895 11:19:05 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.895 11:19:05 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.895 11:19:05 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.895 11:19:05 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.895 11:19:05 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.895 11:19:05 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:22.895 11:19:05 thread -- scripts/common.sh@345 -- # : 1 00:07:22.895 11:19:05 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.895 11:19:05 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.895 11:19:05 thread -- scripts/common.sh@365 -- # decimal 1 00:07:22.895 11:19:05 thread -- scripts/common.sh@353 -- # local d=1 00:07:22.895 11:19:05 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.895 11:19:05 thread -- scripts/common.sh@355 -- # echo 1 00:07:22.895 11:19:05 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.895 11:19:05 thread -- scripts/common.sh@366 -- # decimal 2 00:07:22.895 11:19:05 thread -- scripts/common.sh@353 -- # local d=2 00:07:22.895 11:19:05 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.895 11:19:05 thread -- scripts/common.sh@355 -- # echo 2 00:07:22.895 11:19:05 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.895 11:19:05 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.895 11:19:05 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.895 11:19:05 thread -- scripts/common.sh@368 -- # return 0 00:07:22.895 11:19:05 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.895 11:19:05 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:22.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.895 --rc genhtml_branch_coverage=1 00:07:22.895 --rc genhtml_function_coverage=1 00:07:22.895 --rc genhtml_legend=1 00:07:22.896 --rc geninfo_all_blocks=1 00:07:22.896 --rc geninfo_unexecuted_blocks=1 00:07:22.896 00:07:22.896 ' 00:07:22.896 11:19:05 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:22.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.896 --rc genhtml_branch_coverage=1 00:07:22.896 --rc genhtml_function_coverage=1 00:07:22.896 --rc genhtml_legend=1 00:07:22.896 --rc geninfo_all_blocks=1 00:07:22.896 --rc geninfo_unexecuted_blocks=1 00:07:22.896 00:07:22.896 ' 00:07:22.896 11:19:05 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:22.896 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.896 --rc genhtml_branch_coverage=1 00:07:22.896 --rc genhtml_function_coverage=1 00:07:22.896 --rc genhtml_legend=1 00:07:22.896 --rc geninfo_all_blocks=1 00:07:22.896 --rc geninfo_unexecuted_blocks=1 00:07:22.896 00:07:22.896 ' 00:07:22.896 11:19:05 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:22.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.896 --rc genhtml_branch_coverage=1 00:07:22.896 --rc genhtml_function_coverage=1 00:07:22.896 --rc genhtml_legend=1 00:07:22.896 --rc geninfo_all_blocks=1 00:07:22.896 --rc geninfo_unexecuted_blocks=1 00:07:22.896 00:07:22.896 ' 00:07:22.896 11:19:05 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.896 11:19:05 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:22.896 11:19:05 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.896 11:19:05 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.896 ************************************ 00:07:22.896 START TEST thread_poller_perf 00:07:22.896 ************************************ 00:07:22.896 11:19:05 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.896 [2024-11-15 11:19:05.801924] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:07:22.896 [2024-11-15 11:19:05.802436] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59516 ] 00:07:23.154 [2024-11-15 11:19:05.998782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.413 [2024-11-15 11:19:06.168639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.413 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:24.860 [2024-11-15T11:19:07.810Z] ====================================== 00:07:24.860 [2024-11-15T11:19:07.810Z] busy:2213109808 (cyc) 00:07:24.860 [2024-11-15T11:19:07.810Z] total_run_count: 346000 00:07:24.860 [2024-11-15T11:19:07.810Z] tsc_hz: 2200000000 (cyc) 00:07:24.860 [2024-11-15T11:19:07.810Z] ====================================== 00:07:24.860 [2024-11-15T11:19:07.810Z] poller_cost: 6396 (cyc), 2907 (nsec) 00:07:24.860 00:07:24.860 real 0m1.660s 00:07:24.860 ************************************ 00:07:24.860 END TEST thread_poller_perf 00:07:24.860 ************************************ 00:07:24.860 user 0m1.425s 00:07:24.860 sys 0m0.125s 00:07:24.860 11:19:07 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:24.860 11:19:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.860 11:19:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.860 11:19:07 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:24.860 11:19:07 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:24.860 11:19:07 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.860 ************************************ 00:07:24.860 START TEST thread_poller_perf 00:07:24.860 
************************************ 00:07:24.860 11:19:07 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.860 [2024-11-15 11:19:07.516072] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:24.860 [2024-11-15 11:19:07.516305] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59551 ] 00:07:24.860 [2024-11-15 11:19:07.701456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.118 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:25.118 [2024-11-15 11:19:07.840145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.495 [2024-11-15T11:19:09.445Z] ====================================== 00:07:26.495 [2024-11-15T11:19:09.445Z] busy:2203811572 (cyc) 00:07:26.495 [2024-11-15T11:19:09.445Z] total_run_count: 4482000 00:07:26.495 [2024-11-15T11:19:09.445Z] tsc_hz: 2200000000 (cyc) 00:07:26.495 [2024-11-15T11:19:09.445Z] ====================================== 00:07:26.495 [2024-11-15T11:19:09.445Z] poller_cost: 491 (cyc), 223 (nsec) 00:07:26.495 00:07:26.495 real 0m1.599s 00:07:26.495 user 0m1.366s 00:07:26.495 sys 0m0.122s 00:07:26.495 ************************************ 00:07:26.495 END TEST thread_poller_perf 00:07:26.495 ************************************ 00:07:26.495 11:19:09 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.495 11:19:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.495 11:19:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:26.495 ************************************ 00:07:26.495 END TEST thread 00:07:26.495 ************************************ 00:07:26.495 
00:07:26.495 real 0m3.551s 00:07:26.495 user 0m2.937s 00:07:26.495 sys 0m0.389s 00:07:26.495 11:19:09 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.495 11:19:09 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.495 11:19:09 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:26.495 11:19:09 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:26.495 11:19:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:26.495 11:19:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.495 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:07:26.495 ************************************ 00:07:26.495 START TEST app_cmdline 00:07:26.495 ************************************ 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:26.495 * Looking for test storage... 00:07:26.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.495 11:19:09 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:26.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.495 --rc genhtml_branch_coverage=1 00:07:26.495 --rc genhtml_function_coverage=1 00:07:26.495 --rc 
genhtml_legend=1 00:07:26.495 --rc geninfo_all_blocks=1 00:07:26.495 --rc geninfo_unexecuted_blocks=1 00:07:26.495 00:07:26.495 ' 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:26.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.495 --rc genhtml_branch_coverage=1 00:07:26.495 --rc genhtml_function_coverage=1 00:07:26.495 --rc genhtml_legend=1 00:07:26.495 --rc geninfo_all_blocks=1 00:07:26.495 --rc geninfo_unexecuted_blocks=1 00:07:26.495 00:07:26.495 ' 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:26.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.495 --rc genhtml_branch_coverage=1 00:07:26.495 --rc genhtml_function_coverage=1 00:07:26.495 --rc genhtml_legend=1 00:07:26.495 --rc geninfo_all_blocks=1 00:07:26.495 --rc geninfo_unexecuted_blocks=1 00:07:26.495 00:07:26.495 ' 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:26.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.495 --rc genhtml_branch_coverage=1 00:07:26.495 --rc genhtml_function_coverage=1 00:07:26.495 --rc genhtml_legend=1 00:07:26.495 --rc geninfo_all_blocks=1 00:07:26.495 --rc geninfo_unexecuted_blocks=1 00:07:26.495 00:07:26.495 ' 00:07:26.495 11:19:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:26.495 11:19:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59639 00:07:26.495 11:19:09 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:26.495 11:19:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59639 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59639 ']' 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:26.495 11:19:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:26.755 [2024-11-15 11:19:09.457661] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:26.755 [2024-11-15 11:19:09.458118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59639 ] 00:07:26.755 [2024-11-15 11:19:09.629677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.013 [2024-11-15 11:19:09.764027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.950 11:19:10 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:27.950 11:19:10 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:27.950 11:19:10 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:28.209 { 00:07:28.209 "version": "SPDK v25.01-pre git sha1 514198259", 00:07:28.209 "fields": { 00:07:28.209 "major": 25, 00:07:28.209 "minor": 1, 00:07:28.209 "patch": 0, 00:07:28.209 "suffix": "-pre", 00:07:28.209 "commit": "514198259" 00:07:28.209 } 00:07:28.209 } 00:07:28.209 11:19:10 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:28.209 11:19:10 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:28.209 11:19:10 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:28.209 11:19:10 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:28.209 11:19:10 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.209 11:19:10 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:28.209 11:19:10 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.209 11:19:10 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:28.209 11:19:10 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:28.209 11:19:10 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:28.209 11:19:10 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.468 request: 00:07:28.468 { 00:07:28.468 "method": "env_dpdk_get_mem_stats", 00:07:28.468 "req_id": 1 00:07:28.468 } 00:07:28.468 Got JSON-RPC error response 00:07:28.468 response: 00:07:28.468 { 00:07:28.468 "code": -32601, 00:07:28.468 "message": "Method not found" 00:07:28.468 } 00:07:28.468 11:19:11 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:28.468 11:19:11 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.469 11:19:11 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.469 11:19:11 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.469 11:19:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59639 00:07:28.469 11:19:11 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59639 ']' 00:07:28.469 11:19:11 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59639 00:07:28.469 11:19:11 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:28.469 11:19:11 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:28.469 11:19:11 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59639 00:07:28.469 11:19:11 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:28.469 killing process with pid 59639 00:07:28.469 11:19:11 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:28.469 11:19:11 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59639' 00:07:28.469 11:19:11 app_cmdline -- common/autotest_common.sh@971 -- # kill 59639 00:07:28.469 11:19:11 app_cmdline -- common/autotest_common.sh@976 -- # wait 59639 00:07:31.002 00:07:31.002 real 0m4.321s 00:07:31.002 user 0m4.572s 00:07:31.002 sys 0m0.767s 00:07:31.002 
************************************ 00:07:31.002 END TEST app_cmdline 00:07:31.002 ************************************ 00:07:31.002 11:19:13 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.002 11:19:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.002 11:19:13 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:31.002 11:19:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:31.002 11:19:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.002 11:19:13 -- common/autotest_common.sh@10 -- # set +x 00:07:31.002 ************************************ 00:07:31.002 START TEST version 00:07:31.002 ************************************ 00:07:31.002 11:19:13 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:31.002 * Looking for test storage... 00:07:31.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:31.002 11:19:13 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:31.002 11:19:13 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:31.002 11:19:13 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:31.002 11:19:13 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:31.002 11:19:13 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.002 11:19:13 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.002 11:19:13 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.002 11:19:13 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.002 11:19:13 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.002 11:19:13 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.002 11:19:13 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.002 11:19:13 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.002 11:19:13 version -- scripts/common.sh@340 -- # ver1_l=2 
00:07:31.002 11:19:13 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.002 11:19:13 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.002 11:19:13 version -- scripts/common.sh@344 -- # case "$op" in 00:07:31.002 11:19:13 version -- scripts/common.sh@345 -- # : 1 00:07:31.002 11:19:13 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.002 11:19:13 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.002 11:19:13 version -- scripts/common.sh@365 -- # decimal 1 00:07:31.002 11:19:13 version -- scripts/common.sh@353 -- # local d=1 00:07:31.003 11:19:13 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.003 11:19:13 version -- scripts/common.sh@355 -- # echo 1 00:07:31.003 11:19:13 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.003 11:19:13 version -- scripts/common.sh@366 -- # decimal 2 00:07:31.003 11:19:13 version -- scripts/common.sh@353 -- # local d=2 00:07:31.003 11:19:13 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.003 11:19:13 version -- scripts/common.sh@355 -- # echo 2 00:07:31.003 11:19:13 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.003 11:19:13 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.003 11:19:13 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.003 11:19:13 version -- scripts/common.sh@368 -- # return 0 00:07:31.003 11:19:13 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.003 11:19:13 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:31.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.003 --rc genhtml_branch_coverage=1 00:07:31.003 --rc genhtml_function_coverage=1 00:07:31.003 --rc genhtml_legend=1 00:07:31.003 --rc geninfo_all_blocks=1 00:07:31.003 --rc geninfo_unexecuted_blocks=1 00:07:31.003 00:07:31.003 ' 00:07:31.003 11:19:13 version -- 
common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:31.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.003 --rc genhtml_branch_coverage=1 00:07:31.003 --rc genhtml_function_coverage=1 00:07:31.003 --rc genhtml_legend=1 00:07:31.003 --rc geninfo_all_blocks=1 00:07:31.003 --rc geninfo_unexecuted_blocks=1 00:07:31.003 00:07:31.003 ' 00:07:31.003 11:19:13 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:31.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.003 --rc genhtml_branch_coverage=1 00:07:31.003 --rc genhtml_function_coverage=1 00:07:31.003 --rc genhtml_legend=1 00:07:31.003 --rc geninfo_all_blocks=1 00:07:31.003 --rc geninfo_unexecuted_blocks=1 00:07:31.003 00:07:31.003 ' 00:07:31.003 11:19:13 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:31.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.003 --rc genhtml_branch_coverage=1 00:07:31.003 --rc genhtml_function_coverage=1 00:07:31.003 --rc genhtml_legend=1 00:07:31.003 --rc geninfo_all_blocks=1 00:07:31.003 --rc geninfo_unexecuted_blocks=1 00:07:31.003 00:07:31.003 ' 00:07:31.003 11:19:13 version -- app/version.sh@17 -- # get_header_version major 00:07:31.003 11:19:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:31.003 11:19:13 version -- app/version.sh@14 -- # cut -f2 00:07:31.003 11:19:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.003 11:19:13 version -- app/version.sh@17 -- # major=25 00:07:31.003 11:19:13 version -- app/version.sh@18 -- # get_header_version minor 00:07:31.003 11:19:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:31.003 11:19:13 version -- app/version.sh@14 -- # cut -f2 00:07:31.003 11:19:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.003 11:19:13 version -- app/version.sh@18 -- 
# minor=1 00:07:31.003 11:19:13 version -- app/version.sh@19 -- # get_header_version patch 00:07:31.003 11:19:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:31.003 11:19:13 version -- app/version.sh@14 -- # cut -f2 00:07:31.003 11:19:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.003 11:19:13 version -- app/version.sh@19 -- # patch=0 00:07:31.003 11:19:13 version -- app/version.sh@20 -- # get_header_version suffix 00:07:31.003 11:19:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:31.003 11:19:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.003 11:19:13 version -- app/version.sh@14 -- # cut -f2 00:07:31.003 11:19:13 version -- app/version.sh@20 -- # suffix=-pre 00:07:31.003 11:19:13 version -- app/version.sh@22 -- # version=25.1 00:07:31.003 11:19:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:31.003 11:19:13 version -- app/version.sh@28 -- # version=25.1rc0 00:07:31.003 11:19:13 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:31.003 11:19:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:31.003 11:19:13 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:31.003 11:19:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:31.003 00:07:31.003 real 0m0.273s 00:07:31.003 user 0m0.167s 00:07:31.003 sys 0m0.141s 00:07:31.003 ************************************ 00:07:31.003 END TEST version 00:07:31.003 ************************************ 00:07:31.003 11:19:13 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.003 11:19:13 version -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.003 11:19:13 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:31.003 11:19:13 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:31.003 11:19:13 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:31.003 11:19:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:31.003 11:19:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.003 11:19:13 -- common/autotest_common.sh@10 -- # set +x 00:07:31.003 ************************************ 00:07:31.003 START TEST bdev_raid 00:07:31.003 ************************************ 00:07:31.003 11:19:13 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:31.003 * Looking for test storage... 00:07:31.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:31.262 11:19:13 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:31.262 11:19:13 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:07:31.262 11:19:13 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:31.262 11:19:14 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.262 
11:19:14 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.262 11:19:14 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:31.262 11:19:14 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.262 11:19:14 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:31.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.262 --rc genhtml_branch_coverage=1 00:07:31.262 --rc genhtml_function_coverage=1 00:07:31.262 --rc genhtml_legend=1 00:07:31.262 --rc geninfo_all_blocks=1 00:07:31.262 --rc geninfo_unexecuted_blocks=1 00:07:31.262 00:07:31.262 ' 00:07:31.262 11:19:14 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 
00:07:31.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.262 --rc genhtml_branch_coverage=1 00:07:31.262 --rc genhtml_function_coverage=1 00:07:31.262 --rc genhtml_legend=1 00:07:31.262 --rc geninfo_all_blocks=1 00:07:31.262 --rc geninfo_unexecuted_blocks=1 00:07:31.262 00:07:31.262 ' 00:07:31.262 11:19:14 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:31.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.262 --rc genhtml_branch_coverage=1 00:07:31.262 --rc genhtml_function_coverage=1 00:07:31.262 --rc genhtml_legend=1 00:07:31.262 --rc geninfo_all_blocks=1 00:07:31.262 --rc geninfo_unexecuted_blocks=1 00:07:31.262 00:07:31.262 ' 00:07:31.262 11:19:14 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:31.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.262 --rc genhtml_branch_coverage=1 00:07:31.262 --rc genhtml_function_coverage=1 00:07:31.262 --rc genhtml_legend=1 00:07:31.262 --rc geninfo_all_blocks=1 00:07:31.262 --rc geninfo_unexecuted_blocks=1 00:07:31.262 00:07:31.262 ' 00:07:31.262 11:19:14 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:31.262 11:19:14 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:31.262 11:19:14 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:31.262 11:19:14 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:31.262 11:19:14 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:31.262 11:19:14 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:31.262 11:19:14 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:31.262 11:19:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:31.262 11:19:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.262 11:19:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:07:31.262 ************************************ 00:07:31.262 START TEST raid1_resize_data_offset_test 00:07:31.262 ************************************ 00:07:31.262 11:19:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:07:31.262 11:19:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59832 00:07:31.262 11:19:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.262 11:19:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59832' 00:07:31.262 Process raid pid: 59832 00:07:31.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.262 11:19:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59832 00:07:31.262 11:19:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 59832 ']' 00:07:31.262 11:19:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.262 11:19:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:31.262 11:19:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.262 11:19:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:31.262 11:19:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.521 [2024-11-15 11:19:14.227115] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:07:31.521 [2024-11-15 11:19:14.227679] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.521 [2024-11-15 11:19:14.407737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.789 [2024-11-15 11:19:14.562668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.056 [2024-11-15 11:19:14.788933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.056 [2024-11-15 11:19:14.788990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.315 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:32.315 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:07:32.315 11:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:32.315 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.315 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.315 malloc0 00:07:32.315 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.315 11:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:32.315 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.315 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.574 malloc1 00:07:32.574 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.574 11:19:15 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:32.574 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.574 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.574 null0 00:07:32.574 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.574 11:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:32.574 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.574 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.574 [2024-11-15 11:19:15.358400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:32.574 [2024-11-15 11:19:15.360849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:32.574 [2024-11-15 11:19:15.360906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:32.574 [2024-11-15 11:19:15.361060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:32.574 [2024-11-15 11:19:15.361078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:32.574 [2024-11-15 11:19:15.361650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:32.574 [2024-11-15 11:19:15.362015] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:32.574 [2024-11-15 11:19:15.362246] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:32.574 [2024-11-15 11:19:15.362689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:32.574 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.574 11:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.574 11:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:32.574 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.575 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.575 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.575 11:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:32.575 11:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:32.575 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.575 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.575 [2024-11-15 11:19:15.426684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:32.575 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.575 11:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:32.575 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.575 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.142 malloc2 00:07:33.142 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.142 11:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:33.142 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.142 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.142 [2024-11-15 11:19:15.976771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:33.142 [2024-11-15 11:19:15.994383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:33.142 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.142 [2024-11-15 11:19:15.997653] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:33.142 11:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.142 11:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:33.142 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.142 11:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.142 11:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.142 11:19:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:33.142 11:19:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59832 00:07:33.142 11:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 59832 ']' 00:07:33.142 11:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 59832 00:07:33.142 11:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:07:33.142 11:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:07:33.142 11:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59832 00:07:33.401 killing process with pid 59832 00:07:33.401 11:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:33.401 11:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:33.401 11:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59832' 00:07:33.401 11:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 59832 00:07:33.401 11:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 59832 00:07:33.401 [2024-11-15 11:19:16.093149] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.401 [2024-11-15 11:19:16.093607] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:33.401 [2024-11-15 11:19:16.093685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.401 [2024-11-15 11:19:16.093719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:33.401 [2024-11-15 11:19:16.123843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.401 [2024-11-15 11:19:16.124302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.401 [2024-11-15 11:19:16.124338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:34.779 [2024-11-15 11:19:17.685850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.156 11:19:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:36.156 00:07:36.156 real 0m4.809s 00:07:36.156 user 0m4.610s 00:07:36.156 sys 0m0.771s 00:07:36.156 
************************************ 00:07:36.156 END TEST raid1_resize_data_offset_test 00:07:36.156 ************************************ 00:07:36.156 11:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:36.156 11:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.156 11:19:18 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:36.156 11:19:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:36.156 11:19:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:36.156 11:19:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.156 ************************************ 00:07:36.156 START TEST raid0_resize_superblock_test 00:07:36.156 ************************************ 00:07:36.156 11:19:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:07:36.156 11:19:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:36.156 Process raid pid: 59910 00:07:36.156 11:19:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59910 00:07:36.156 11:19:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59910' 00:07:36.156 11:19:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:36.156 11:19:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59910 00:07:36.156 11:19:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 59910 ']' 00:07:36.156 11:19:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.156 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:07:36.156 11:19:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:36.156 11:19:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.156 11:19:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:36.156 11:19:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.415 [2024-11-15 11:19:19.109759] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:36.415 [2024-11-15 11:19:19.110213] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.415 [2024-11-15 11:19:19.311972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.674 [2024-11-15 11:19:19.487424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.932 [2024-11-15 11:19:19.718567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.932 [2024-11-15 11:19:19.718637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.190 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.190 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:37.190 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:37.190 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.190 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
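
The `raid1_resize_data_offset_test` run recorded above verifies `data_offset` by piping `bdev_raid_get_bdevs all` through the jq filter `.[].base_bdevs_list[2].data_offset`. A minimal self-contained illustration of that filter follows; the JSON here is a hand-written stand-in for real RPC output (field layout mirrors what the filter expects, values are illustrative), not captured from this run:

```shell
# Stand-in for `rpc.py bdev_raid_get_bdevs all` output: an array of raid
# bdevs, each carrying a base_bdevs_list. Hypothetical sample, not real output.
json='[{"name":"Raid","base_bdevs_list":[
  {"name":"malloc0","data_offset":2048},
  {"name":"malloc1","data_offset":2048},
  {"name":"null0","data_offset":2048}]}]'

# Same filter the test uses: data_offset of the third base bdev (index 2).
offset=$(echo "$json" | jq -r '.[].base_bdevs_list[2].data_offset')
echo "$offset"
```

In the log this value starts at 2048 (1 MiB at a 512-byte block length) and the test re-reads it after `bdev_raid_add_base_bdev` to confirm the offset moved to 2070.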
00:07:37.756 malloc0 00:07:37.756 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.756 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:37.756 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.756 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.756 [2024-11-15 11:19:20.687808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:37.756 [2024-11-15 11:19:20.687879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.756 [2024-11-15 11:19:20.687908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:37.756 [2024-11-15 11:19:20.687924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.756 [2024-11-15 11:19:20.690974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.756 [2024-11-15 11:19:20.691172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:37.756 pt0 00:07:37.756 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.756 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:37.756 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.756 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.014 cc9b4c1f-5f79-4f8b-ad13-4c8c012c6dcf 00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.014 397ccc3a-e6d8-4bec-b42e-77fbf569a337 00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.014 c3cef158-9a1f-4caf-b743-05bdbbc8bd67 00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.014 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.014 [2024-11-15 11:19:20.880091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 397ccc3a-e6d8-4bec-b42e-77fbf569a337 is claimed 00:07:38.014 [2024-11-15 11:19:20.880210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c3cef158-9a1f-4caf-b743-05bdbbc8bd67 is claimed 00:07:38.014 [2024-11-15 11:19:20.880436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:38.014 [2024-11-15 11:19:20.880460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:38.015 [2024-11-15 11:19:20.880932] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:38.015 [2024-11-15 11:19:20.881304] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:38.015 [2024-11-15 11:19:20.881321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:38.015 [2024-11-15 11:19:20.881505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.015 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.015 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:38.015 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.015 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:38.015 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.015 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.015 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:38.015 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:38.015 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:38.015 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.015 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.015 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.289 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:38.289 11:19:20 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:38.289 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:38.289 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:38.289 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:38.289 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.289 11:19:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.289 [2024-11-15 11:19:20.996471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.289 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:38.289 11:19:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.289 [2024-11-15 11:19:21.044662] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:38.289 [2024-11-15 11:19:21.044877] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '397ccc3a-e6d8-4bec-b42e-77fbf569a337' was resized: old size 131072, new size 204800 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.289 [2024-11-15 11:19:21.052314] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:38.289 [2024-11-15 11:19:21.052568] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c3cef158-9a1f-4caf-b743-05bdbbc8bd67' was resized: old size 131072, new size 204800 00:07:38.289 [2024-11-15 11:19:21.052655] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.289 11:19:21 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.289 [2024-11-15 11:19:21.172627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.289 [2024-11-15 11:19:21.216273] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:07:38.289 [2024-11-15 11:19:21.216381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:38.289 [2024-11-15 11:19:21.216406] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.289 [2024-11-15 11:19:21.216428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:38.289 [2024-11-15 11:19:21.216637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.289 [2024-11-15 11:19:21.216687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.289 [2024-11-15 11:19:21.216707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.289 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.289 [2024-11-15 11:19:21.224147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:38.289 [2024-11-15 11:19:21.224236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.289 [2024-11-15 11:19:21.224275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:38.289 [2024-11-15 11:19:21.224293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.576 [2024-11-15 11:19:21.227495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.576 [2024-11-15 11:19:21.227574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:38.576 pt0 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.576 [2024-11-15 11:19:21.230112] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 397ccc3a-e6d8-4bec-b42e-77fbf569a337 00:07:38.576 [2024-11-15 11:19:21.230203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 397ccc3a-e6d8-4bec-b42e-77fbf569a337 is claimed 00:07:38.576 [2024-11-15 11:19:21.230382] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c3cef158-9a1f-4caf-b743-05bdbbc8bd67 00:07:38.576 [2024-11-15 11:19:21.230588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c3cef158-9a1f-4caf-b743-05bdbbc8bd67 is claimed 00:07:38.576 [2024-11-15 11:19:21.230793] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c3cef158-9a1f-4caf-b743-05bdbbc8bd67 (2) smaller than existing raid bdev Raid (3) 00:07:38.576 [2024-11-15 11:19:21.230832] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 397ccc3a-e6d8-4bec-b42e-77fbf569a337: File exists 00:07:38.576 [2024-11-15 11:19:21.230893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:38.576 [2024-11-15 11:19:21.230920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:38.576 [2024-11-15 11:19:21.231271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:38.576 [2024-11-15 11:19:21.231525] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:38.576 [2024-11-15 
11:19:21.231557] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:38.576 [2024-11-15 11:19:21.231770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.576 [2024-11-15 11:19:21.244485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59910 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 59910 ']' 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 59910 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59910 00:07:38.576 killing process with pid 59910 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59910' 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 59910 00:07:38.576 [2024-11-15 11:19:21.321757] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.576 11:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 59910 00:07:38.576 [2024-11-15 11:19:21.321866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.576 [2024-11-15 11:19:21.321939] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.576 [2024-11-15 11:19:21.321955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:39.952 [2024-11-15 11:19:22.630632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.888 11:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:40.888 00:07:40.888 real 0m4.664s 00:07:40.888 user 0m4.885s 00:07:40.888 sys 0m0.776s 00:07:40.888 11:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:40.888 ************************************ 00:07:40.888 END TEST raid0_resize_superblock_test 00:07:40.888 
************************************ 00:07:40.888 11:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.888 11:19:23 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:40.888 11:19:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:40.888 11:19:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:40.888 11:19:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.888 ************************************ 00:07:40.888 START TEST raid1_resize_superblock_test 00:07:40.888 ************************************ 00:07:40.888 11:19:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:07:40.888 11:19:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:40.888 Process raid pid: 60014 00:07:40.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
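
The block counts in the `raid0_resize_superblock_test` output above are internally consistent: each 64 MiB lvol is 131072 512-byte blocks (204800 after the resize to 100 MiB), and the raid0 capacity is the sum over the two base bdevs minus a per-base reservation. A quick shell check of that arithmetic; note the 8192-block reservation is inferred from the logged numbers (245760 and 393216), not taken from SPDK source:

```shell
# Numbers from the log: lvols resized from 131072 to 204800 blocks,
# raid0 'Raid' reported as 245760 blocks before the resize, 393216 after.
reserved=8192   # per-base reservation inferred from 2*(131072 - x) = 245760
before=$(( 2 * (131072 - reserved) ))
after=$(( 2 * (204800 - reserved) ))
echo "$before $after"
```

This matches the logged notice `block count was changed from 245760 to 393216` that follows the two `bdev_lvol_resize ... 100` calls.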
00:07:40.888 11:19:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60014 00:07:40.888 11:19:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:40.888 11:19:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60014' 00:07:40.888 11:19:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60014 00:07:40.888 11:19:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60014 ']' 00:07:40.889 11:19:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.889 11:19:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:40.889 11:19:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.889 11:19:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:40.889 11:19:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.889 [2024-11-15 11:19:23.825415] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:07:40.889 [2024-11-15 11:19:23.825972] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.148 [2024-11-15 11:19:24.014612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.407 [2024-11-15 11:19:24.143009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.407 [2024-11-15 11:19:24.354548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.407 [2024-11-15 11:19:24.354800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.974 11:19:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:41.974 11:19:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:41.974 11:19:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:41.974 11:19:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.974 11:19:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.542 malloc0 00:07:42.542 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.542 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:42.542 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.542 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.542 [2024-11-15 11:19:25.414289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:42.542 [2024-11-15 11:19:25.414380] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.542 [2024-11-15 11:19:25.414414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:42.542 [2024-11-15 11:19:25.414433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.542 [2024-11-15 11:19:25.417782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.542 [2024-11-15 11:19:25.417844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:42.542 pt0 00:07:42.542 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.542 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:42.542 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.542 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.802 bc1e8687-103c-4db1-9d74-b89c32ac8b52 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.802 102c3d39-d9fd-4e8f-92e1-824d3d3668c3 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.802 11:19:25 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.802 8a2530a5-fd59-4468-9fdc-3c912151a846 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.802 [2024-11-15 11:19:25.626795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 102c3d39-d9fd-4e8f-92e1-824d3d3668c3 is claimed 00:07:42.802 [2024-11-15 11:19:25.627125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8a2530a5-fd59-4468-9fdc-3c912151a846 is claimed 00:07:42.802 [2024-11-15 11:19:25.627387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:42.802 [2024-11-15 11:19:25.627425] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:42.802 [2024-11-15 11:19:25.627793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:42.802 [2024-11-15 11:19:25.628077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:42.802 [2024-11-15 11:19:25.628093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:42.802 [2024-11-15 11:19:25.628331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.802 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.803 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.063 [2024-11-15 
11:19:25.751216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.063 [2024-11-15 11:19:25.803245] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:43.063 [2024-11-15 11:19:25.803297] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '102c3d39-d9fd-4e8f-92e1-824d3d3668c3' was resized: old size 131072, new size 204800 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.063 [2024-11-15 11:19:25.810923] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:43.063 [2024-11-15 11:19:25.811102] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8a2530a5-fd59-4468-9fdc-3c912151a846' was resized: old size 131072, new size 204800 00:07:43.063 
[2024-11-15 11:19:25.811170] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:43.063 11:19:25 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.063 [2024-11-15 11:19:25.935164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.063 [2024-11-15 11:19:25.978994] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:43.063 [2024-11-15 11:19:25.979293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:43.063 [2024-11-15 11:19:25.979345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:43.063 [2024-11-15 11:19:25.979592] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.063 [2024-11-15 11:19:25.979905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.063 [2024-11-15 11:19:25.979994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.063 
[2024-11-15 11:19:25.980015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.063 [2024-11-15 11:19:25.986829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:43.063 [2024-11-15 11:19:25.986901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.063 [2024-11-15 11:19:25.986928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:43.063 [2024-11-15 11:19:25.986945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.063 [2024-11-15 11:19:25.990113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.063 [2024-11-15 11:19:25.990184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:43.063 pt0 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.063 [2024-11-15 11:19:25.992663] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 102c3d39-d9fd-4e8f-92e1-824d3d3668c3 00:07:43.063 [2024-11-15 11:19:25.992905] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 102c3d39-d9fd-4e8f-92e1-824d3d3668c3 is claimed 00:07:43.063 [2024-11-15 11:19:25.993057] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8a2530a5-fd59-4468-9fdc-3c912151a846 00:07:43.063 [2024-11-15 11:19:25.993093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8a2530a5-fd59-4468-9fdc-3c912151a846 is claimed 00:07:43.063 [2024-11-15 11:19:25.993317] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 8a2530a5-fd59-4468-9fdc-3c912151a846 (2) smaller than existing raid bdev Raid (3) 00:07:43.063 [2024-11-15 11:19:25.993363] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 102c3d39-d9fd-4e8f-92e1-824d3d3668c3: File exists 00:07:43.063 [2024-11-15 11:19:25.993434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:43.063 [2024-11-15 11:19:25.993454] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:43.063 [2024-11-15 11:19:25.993801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:43.063 [2024-11-15 11:19:25.994092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:43.063 [2024-11-15 11:19:25.994114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:43.063 [2024-11-15 11:19:25.994330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:43.063 11:19:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:43.063 11:19:26 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.063 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:43.063 11:19:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:43.063 11:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.063 [2024-11-15 11:19:26.007279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.322 11:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.322 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:43.322 11:19:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:43.322 11:19:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:43.322 11:19:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60014 00:07:43.322 11:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60014 ']' 00:07:43.322 11:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60014 00:07:43.322 11:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:43.322 11:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:43.322 11:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60014 00:07:43.322 killing process with pid 60014 00:07:43.322 11:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:43.322 11:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:43.322 11:19:26 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 60014' 00:07:43.323 11:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60014 00:07:43.323 [2024-11-15 11:19:26.088623] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.323 11:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60014 00:07:43.323 [2024-11-15 11:19:26.088732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.323 [2024-11-15 11:19:26.088807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.323 [2024-11-15 11:19:26.088821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:44.699 [2024-11-15 11:19:27.325578] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.634 11:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:45.634 00:07:45.634 real 0m4.634s 00:07:45.634 user 0m4.864s 00:07:45.634 sys 0m0.766s 00:07:45.634 ************************************ 00:07:45.634 END TEST raid1_resize_superblock_test 00:07:45.634 ************************************ 00:07:45.634 11:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.634 11:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.634 11:19:28 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:45.634 11:19:28 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:45.634 11:19:28 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:45.634 11:19:28 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:45.634 11:19:28 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:45.634 11:19:28 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:45.634 
11:19:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:45.634 11:19:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.634 11:19:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.634 ************************************ 00:07:45.634 START TEST raid_function_test_raid0 00:07:45.634 ************************************ 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60117 00:07:45.634 Process raid pid: 60117 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60117' 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60117 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 60117 ']' 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:45.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:45.634 11:19:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:45.634 [2024-11-15 11:19:28.537234] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:45.634 [2024-11-15 11:19:28.537704] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.969 [2024-11-15 11:19:28.731349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.969 [2024-11-15 11:19:28.905556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.227 [2024-11-15 11:19:29.106912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.227 [2024-11-15 11:19:29.107280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:46.792 Base_1 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.792 
11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:46.792 Base_2 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:46.792 [2024-11-15 11:19:29.598092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:46.792 [2024-11-15 11:19:29.600701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:46.792 [2024-11-15 11:19:29.600949] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:46.792 [2024-11-15 11:19:29.600977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:46.792 [2024-11-15 11:19:29.601390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:46.792 [2024-11-15 11:19:29.601608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:46.792 [2024-11-15 11:19:29.601623] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:46.792 [2024-11-15 11:19:29.601854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:46.792 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:47.051 [2024-11-15 11:19:29.930304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:47.051 /dev/nbd0 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.051 1+0 records in 00:07:47.051 1+0 records out 00:07:47.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335283 s, 12.2 MB/s 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:47.051 11:19:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:47.618 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:47.618 { 00:07:47.618 "nbd_device": "/dev/nbd0", 00:07:47.618 "bdev_name": "raid" 00:07:47.618 } 00:07:47.618 ]' 00:07:47.618 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:47.618 { 00:07:47.618 "nbd_device": "/dev/nbd0", 00:07:47.618 "bdev_name": "raid" 00:07:47.618 } 00:07:47.618 ]' 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:47.619 4096+0 records in 00:07:47.619 4096+0 records out 00:07:47.619 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0319611 s, 65.6 MB/s 00:07:47.619 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:47.878 4096+0 records in 00:07:47.878 4096+0 records out 00:07:47.878 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.338201 s, 6.2 MB/s 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:47.878 128+0 records in 00:07:47.878 128+0 records out 00:07:47.878 65536 bytes (66 kB, 64 KiB) copied, 0.0011598 s, 56.5 MB/s 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:47.878 2035+0 records in 00:07:47.878 2035+0 records out 00:07:47.878 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.00990232 s, 105 MB/s 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:47.878 456+0 records in 00:07:47.878 456+0 records out 00:07:47.878 233472 bytes (233 kB, 228 KiB) copied, 0.00232735 s, 100 MB/s 00:07:47.878 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:48.137 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:48.137 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:48.137 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:48.137 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:48.137 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:48.137 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:48.137 11:19:30 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:48.138 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:48.138 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.138 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:48.138 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.138 11:19:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:48.397 [2024-11-15 11:19:31.165800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.397 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.397 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.397 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.397 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.397 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.397 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.397 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:48.397 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.397 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:48.397 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:48.397 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60117 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 60117 ']' 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 60117 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60117 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:07:48.656 killing process with pid 60117 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60117' 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 60117 00:07:48.656 [2024-11-15 11:19:31.599688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.656 11:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60117 00:07:48.656 [2024-11-15 11:19:31.599830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.656 [2024-11-15 11:19:31.599902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.656 [2024-11-15 11:19:31.599933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:48.915 [2024-11-15 11:19:31.789448] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.292 11:19:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:50.292 00:07:50.292 real 0m4.447s 00:07:50.292 user 0m5.409s 00:07:50.292 sys 0m1.112s 00:07:50.292 11:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.292 ************************************ 00:07:50.292 END TEST raid_function_test_raid0 00:07:50.292 11:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:50.292 ************************************ 00:07:50.292 11:19:32 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:50.292 11:19:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:50.292 11:19:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:50.292 11:19:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.293 
************************************ 00:07:50.293 START TEST raid_function_test_concat 00:07:50.293 ************************************ 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60251 00:07:50.293 Process raid pid: 60251 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60251' 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60251 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60251 ']' 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.293 11:19:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:50.293 [2024-11-15 11:19:33.044100] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:50.293 [2024-11-15 11:19:33.044333] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.293 [2024-11-15 11:19:33.235082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.551 [2024-11-15 11:19:33.370444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.810 [2024-11-15 11:19:33.586913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.810 [2024-11-15 11:19:33.587026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.068 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:51.068 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:07:51.068 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:51.068 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.068 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:51.328 Base_1 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:51.328 Base_2 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:51.328 [2024-11-15 11:19:34.107309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:51.328 [2024-11-15 11:19:34.110139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:51.328 [2024-11-15 11:19:34.110252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:51.328 [2024-11-15 11:19:34.110274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:51.328 [2024-11-15 11:19:34.110609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:51.328 [2024-11-15 11:19:34.110909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:51.328 [2024-11-15 11:19:34.110936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:51.328 [2024-11-15 11:19:34.111192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:51.328 11:19:34 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:51.328 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:51.587 [2024-11-15 11:19:34.391444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:51.587 /dev/nbd0 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.587 1+0 records in 00:07:51.587 1+0 records out 00:07:51.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244246 s, 16.8 MB/s 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.587 
11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:51.587 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:51.846 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:51.846 { 00:07:51.846 "nbd_device": "/dev/nbd0", 00:07:51.846 "bdev_name": "raid" 00:07:51.846 } 00:07:51.846 ]' 00:07:51.846 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:51.846 { 00:07:51.846 "nbd_device": "/dev/nbd0", 00:07:51.846 "bdev_name": "raid" 00:07:51.846 } 00:07:51.846 ]' 00:07:51.846 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:52.104 
11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:52.104 4096+0 records in 00:07:52.104 4096+0 records out 00:07:52.104 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0253717 s, 82.7 MB/s 00:07:52.104 11:19:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:52.362 4096+0 records in 00:07:52.362 4096+0 
records out 00:07:52.362 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.325453 s, 6.4 MB/s 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:52.362 128+0 records in 00:07:52.362 128+0 records out 00:07:52.362 65536 bytes (66 kB, 64 KiB) copied, 0.00107804 s, 60.8 MB/s 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:52.362 2035+0 records in 00:07:52.362 2035+0 records out 00:07:52.362 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0102697 s, 101 MB/s 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:52.362 456+0 records in 00:07:52.362 456+0 records out 00:07:52.362 233472 bytes (233 kB, 228 KiB) copied, 0.00285929 s, 81.7 MB/s 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:52.362 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:52.363 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:52.363 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:52.363 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:52.363 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:52.363 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:52.363 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.363 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:52.363 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.363 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:52.929 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.929 [2024-11-15 11:19:35.628428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.929 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.929 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.929 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.929 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.929 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.929 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:52.929 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.929 11:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:52.929 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:52.929 11:19:35 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:53.188 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:53.189 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:53.189 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:53.189 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:53.189 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:53.189 11:19:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60251 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60251 ']' 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 60251 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60251 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:53.189 killing process with pid 60251 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60251' 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60251 00:07:53.189 [2024-11-15 11:19:36.038495] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.189 11:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60251 00:07:53.189 [2024-11-15 11:19:36.038660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.189 [2024-11-15 11:19:36.038748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.189 [2024-11-15 11:19:36.038768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:53.448 [2024-11-15 11:19:36.234855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.383 11:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:54.383 00:07:54.383 real 0m4.367s 00:07:54.383 user 0m5.301s 00:07:54.383 sys 0m1.084s 00:07:54.383 11:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.383 11:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:54.383 ************************************ 00:07:54.383 END TEST raid_function_test_concat 00:07:54.383 ************************************ 00:07:54.642 11:19:37 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:54.642 11:19:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:54.642 11:19:37 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:07:54.642 11:19:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.642 ************************************ 00:07:54.642 START TEST raid0_resize_test 00:07:54.642 ************************************ 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60380 00:07:54.642 Process raid pid: 60380 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60380' 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60380 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60380 ']' 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:07:54.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:54.642 11:19:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.642 [2024-11-15 11:19:37.470702] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:54.642 [2024-11-15 11:19:37.470924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.901 [2024-11-15 11:19:37.655722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.901 [2024-11-15 11:19:37.792423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.159 [2024-11-15 11:19:38.006278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.159 [2024-11-15 11:19:38.006398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.728 Base_1 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.728 
11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.728 Base_2 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.728 [2024-11-15 11:19:38.436614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:55.728 [2024-11-15 11:19:38.439403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:55.728 [2024-11-15 11:19:38.439494] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:55.728 [2024-11-15 11:19:38.439528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:55.728 [2024-11-15 11:19:38.439841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:55.728 [2024-11-15 11:19:38.440016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:55.728 [2024-11-15 11:19:38.440040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:55.728 [2024-11-15 11:19:38.440227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.728 
11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.728 [2024-11-15 11:19:38.444615] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:55.728 [2024-11-15 11:19:38.444665] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:55.728 true 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.728 [2024-11-15 11:19:38.456832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.728 [2024-11-15 11:19:38.512665] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:55.728 [2024-11-15 11:19:38.512712] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:55.728 [2024-11-15 11:19:38.512764] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:55.728 true 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.728 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:55.729 [2024-11-15 11:19:38.524785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60380 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@952 -- # '[' -z 60380 ']' 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60380 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60380 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:55.729 killing process with pid 60380 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60380' 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60380 00:07:55.729 [2024-11-15 11:19:38.610233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.729 11:19:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60380 00:07:55.729 [2024-11-15 11:19:38.610366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.729 [2024-11-15 11:19:38.610450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.729 [2024-11-15 11:19:38.610465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:55.729 [2024-11-15 11:19:38.627724] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.107 11:19:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:57.107 00:07:57.107 real 0m2.350s 00:07:57.107 user 0m2.560s 00:07:57.107 sys 0m0.409s 00:07:57.107 11:19:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.107 
11:19:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.107 ************************************ 00:07:57.107 END TEST raid0_resize_test 00:07:57.107 ************************************ 00:07:57.107 11:19:39 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:57.107 11:19:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:57.107 11:19:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.107 11:19:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 ************************************ 00:07:57.108 START TEST raid1_resize_test 00:07:57.108 ************************************ 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60442 00:07:57.108 Process raid pid: 60442 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60442' 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60442 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60442 ']' 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:57.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:57.108 11:19:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 [2024-11-15 11:19:39.871189] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
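Both resize tests in this trace size everything in 512-byte blocks, and the num_blocks values they assert (131072 for the raid0 array, 65536 for raid1, doubling after each base bdev is resized from 32 MiB to 64 MiB) follow from simple arithmetic: raid0 exposes the summed capacity of its two bases, raid1 exposes a single base's capacity. A minimal sketch of that calculation (the helper names are illustrative, not functions from the test scripts):

```shell
# Convert between MiB and 512-byte block counts, as raid_resize_test
# does when it derives raid_size_mb from the num_blocks reported by
# bdev_get_bdevs.
mb_to_blocks() { echo $(( $1 * 1024 * 1024 / 512 )); }
blocks_to_mb() { echo $(( $1 * 512 / 1024 / 1024 )); }

# raid0 over two 32 MiB null bdevs exposes the summed capacity;
# raid1 mirrors them, so it exposes one base's capacity.
raid0_blocks=$(( 2 * $(mb_to_blocks 32) ))   # 131072, i.e. 64 MiB
raid1_blocks=$(mb_to_blocks 32)              # 65536, i.e. 32 MiB

echo "$raid0_blocks $(blocks_to_mb "$raid0_blocks")"
echo "$raid1_blocks $(blocks_to_mb "$raid1_blocks")"
```

After `bdev_null_resize Base_1 64` and `bdev_null_resize Base_2 64`, the same arithmetic gives the post-resize counts the log reports: 262144 (128 MiB) for raid0 and 131072 (64 MiB) for raid1.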
00:07:57.108 [2024-11-15 11:19:39.871385] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.366 [2024-11-15 11:19:40.063751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.366 [2024-11-15 11:19:40.205166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.626 [2024-11-15 11:19:40.402017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.626 [2024-11-15 11:19:40.402095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.886 Base_1 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.886 Base_2 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.886 [2024-11-15 11:19:40.824162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:57.886 [2024-11-15 11:19:40.826723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:57.886 [2024-11-15 11:19:40.826820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:57.886 [2024-11-15 11:19:40.826839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:57.886 [2024-11-15 11:19:40.827204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:57.886 [2024-11-15 11:19:40.827395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:57.886 [2024-11-15 11:19:40.827430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:57.886 [2024-11-15 11:19:40.827626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.886 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.886 [2024-11-15 11:19:40.832173] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:57.886 [2024-11-15 11:19:40.832276] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:58.157 true 00:07:58.157 
11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.157 [2024-11-15 11:19:40.844348] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.157 [2024-11-15 11:19:40.896111] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:58.157 [2024-11-15 11:19:40.896138] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:58.157 [2024-11-15 11:19:40.896229] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:58.157 true 00:07:58.157 11:19:40 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:58.157 [2024-11-15 11:19:40.908397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60442 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60442 ']' 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60442 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60442 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:58.157 killing process with pid 60442 00:07:58.157 11:19:40 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60442' 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60442 00:07:58.157 [2024-11-15 11:19:40.986995] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.157 11:19:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60442 00:07:58.157 [2024-11-15 11:19:40.987119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.157 [2024-11-15 11:19:40.987825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.157 [2024-11-15 11:19:40.987873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:58.157 [2024-11-15 11:19:41.001964] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.542 11:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:59.542 00:07:59.542 real 0m2.303s 00:07:59.542 user 0m2.468s 00:07:59.542 sys 0m0.425s 00:07:59.542 11:19:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.542 11:19:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.542 ************************************ 00:07:59.542 END TEST raid1_resize_test 00:07:59.542 ************************************ 00:07:59.542 11:19:42 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:59.542 11:19:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:59.542 11:19:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:59.542 11:19:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:59.542 11:19:42 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:07:59.542 11:19:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.542 ************************************ 00:07:59.542 START TEST raid_state_function_test 00:07:59.542 ************************************ 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60499 00:07:59.542 Process raid pid: 60499 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60499' 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60499 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60499 ']' 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
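The prologue of raid_state_function_test above builds the `bdev_raid_create` arguments from the requested level: the trace's `'[' raid0 '!=' raid1 ']'` branch sets a 64 KiB strip for striped levels and leaves it out for raid1. A condensed sketch of that branch (the per-strip block count at the end is derived arithmetic, not something the test prints):

```shell
raid_level=raid0

# Striped levels (raid0, concat) take a strip size; raid1 does not.
if [ "$raid_level" != raid1 ]; then
    strip_size=64                         # KiB, as in '-z 64' above
    strip_size_create_arg="-z $strip_size"
else
    strip_size=0
    strip_size_create_arg=""
fi

echo "$strip_size_create_arg"
# With 512-byte blocks, a 64 KiB strip spans 128 blocks.
echo $(( strip_size * 1024 / 512 ))
```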
00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.542 11:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.542 [2024-11-15 11:19:42.235844] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:59.542 [2024-11-15 11:19:42.236091] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.542 [2024-11-15 11:19:42.422720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.800 [2024-11-15 11:19:42.555389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.058 [2024-11-15 11:19:42.761870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.058 [2024-11-15 11:19:42.761945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.316 [2024-11-15 11:19:43.227238] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.316 
[2024-11-15 11:19:43.227354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.316 [2024-11-15 11:19:43.227372] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.316 [2024-11-15 11:19:43.227389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.316 11:19:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.316 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.574 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.574 "name": "Existed_Raid", 00:08:00.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.574 "strip_size_kb": 64, 00:08:00.574 "state": "configuring", 00:08:00.574 "raid_level": "raid0", 00:08:00.574 "superblock": false, 00:08:00.574 "num_base_bdevs": 2, 00:08:00.574 "num_base_bdevs_discovered": 0, 00:08:00.574 "num_base_bdevs_operational": 2, 00:08:00.574 "base_bdevs_list": [ 00:08:00.574 { 00:08:00.574 "name": "BaseBdev1", 00:08:00.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.574 "is_configured": false, 00:08:00.574 "data_offset": 0, 00:08:00.574 "data_size": 0 00:08:00.574 }, 00:08:00.574 { 00:08:00.574 "name": "BaseBdev2", 00:08:00.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.574 "is_configured": false, 00:08:00.574 "data_offset": 0, 00:08:00.574 "data_size": 0 00:08:00.574 } 00:08:00.574 ] 00:08:00.574 }' 00:08:00.574 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.574 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.832 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:00.832 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.832 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.832 [2024-11-15 11:19:43.747369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.832 [2024-11-15 11:19:43.747418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:08:00.832 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.832 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.832 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.832 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.832 [2024-11-15 11:19:43.755322] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.832 [2024-11-15 11:19:43.755388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.832 [2024-11-15 11:19:43.755404] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.832 [2024-11-15 11:19:43.755423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.832 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.832 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:00.832 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.832 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.090 [2024-11-15 11:19:43.800888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.090 BaseBdev1 00:08:01.090 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:01.091 11:19:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.091 [ 00:08:01.091 { 00:08:01.091 "name": "BaseBdev1", 00:08:01.091 "aliases": [ 00:08:01.091 "c0b13f11-56cb-47af-96f5-66315563333e" 00:08:01.091 ], 00:08:01.091 "product_name": "Malloc disk", 00:08:01.091 "block_size": 512, 00:08:01.091 "num_blocks": 65536, 00:08:01.091 "uuid": "c0b13f11-56cb-47af-96f5-66315563333e", 00:08:01.091 "assigned_rate_limits": { 00:08:01.091 "rw_ios_per_sec": 0, 00:08:01.091 "rw_mbytes_per_sec": 0, 00:08:01.091 "r_mbytes_per_sec": 0, 00:08:01.091 "w_mbytes_per_sec": 0 00:08:01.091 }, 00:08:01.091 "claimed": true, 00:08:01.091 "claim_type": "exclusive_write", 00:08:01.091 "zoned": false, 00:08:01.091 "supported_io_types": { 00:08:01.091 "read": true, 00:08:01.091 "write": true, 00:08:01.091 "unmap": true, 00:08:01.091 "flush": true, 
00:08:01.091 "reset": true, 00:08:01.091 "nvme_admin": false, 00:08:01.091 "nvme_io": false, 00:08:01.091 "nvme_io_md": false, 00:08:01.091 "write_zeroes": true, 00:08:01.091 "zcopy": true, 00:08:01.091 "get_zone_info": false, 00:08:01.091 "zone_management": false, 00:08:01.091 "zone_append": false, 00:08:01.091 "compare": false, 00:08:01.091 "compare_and_write": false, 00:08:01.091 "abort": true, 00:08:01.091 "seek_hole": false, 00:08:01.091 "seek_data": false, 00:08:01.091 "copy": true, 00:08:01.091 "nvme_iov_md": false 00:08:01.091 }, 00:08:01.091 "memory_domains": [ 00:08:01.091 { 00:08:01.091 "dma_device_id": "system", 00:08:01.091 "dma_device_type": 1 00:08:01.091 }, 00:08:01.091 { 00:08:01.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.091 "dma_device_type": 2 00:08:01.091 } 00:08:01.091 ], 00:08:01.091 "driver_specific": {} 00:08:01.091 } 00:08:01.091 ] 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.091 "name": "Existed_Raid", 00:08:01.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.091 "strip_size_kb": 64, 00:08:01.091 "state": "configuring", 00:08:01.091 "raid_level": "raid0", 00:08:01.091 "superblock": false, 00:08:01.091 "num_base_bdevs": 2, 00:08:01.091 "num_base_bdevs_discovered": 1, 00:08:01.091 "num_base_bdevs_operational": 2, 00:08:01.091 "base_bdevs_list": [ 00:08:01.091 { 00:08:01.091 "name": "BaseBdev1", 00:08:01.091 "uuid": "c0b13f11-56cb-47af-96f5-66315563333e", 00:08:01.091 "is_configured": true, 00:08:01.091 "data_offset": 0, 00:08:01.091 "data_size": 65536 00:08:01.091 }, 00:08:01.091 { 00:08:01.091 "name": "BaseBdev2", 00:08:01.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.091 "is_configured": false, 00:08:01.091 "data_offset": 0, 00:08:01.091 "data_size": 0 00:08:01.091 } 00:08:01.091 ] 00:08:01.091 }' 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.091 11:19:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.657 [2024-11-15 11:19:44.373132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.657 [2024-11-15 11:19:44.373268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.657 [2024-11-15 11:19:44.385135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.657 [2024-11-15 11:19:44.387849] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.657 [2024-11-15 11:19:44.387914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.657 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.657 "name": "Existed_Raid", 00:08:01.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.658 "strip_size_kb": 64, 00:08:01.658 "state": "configuring", 00:08:01.658 "raid_level": "raid0", 00:08:01.658 "superblock": false, 00:08:01.658 "num_base_bdevs": 2, 00:08:01.658 
"num_base_bdevs_discovered": 1, 00:08:01.658 "num_base_bdevs_operational": 2, 00:08:01.658 "base_bdevs_list": [ 00:08:01.658 { 00:08:01.658 "name": "BaseBdev1", 00:08:01.658 "uuid": "c0b13f11-56cb-47af-96f5-66315563333e", 00:08:01.658 "is_configured": true, 00:08:01.658 "data_offset": 0, 00:08:01.658 "data_size": 65536 00:08:01.658 }, 00:08:01.658 { 00:08:01.658 "name": "BaseBdev2", 00:08:01.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.658 "is_configured": false, 00:08:01.658 "data_offset": 0, 00:08:01.658 "data_size": 0 00:08:01.658 } 00:08:01.658 ] 00:08:01.658 }' 00:08:01.658 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.658 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.222 [2024-11-15 11:19:44.955541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.222 [2024-11-15 11:19:44.955611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:02.222 [2024-11-15 11:19:44.955624] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:02.222 [2024-11-15 11:19:44.955913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:02.222 [2024-11-15 11:19:44.956106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:02.222 [2024-11-15 11:19:44.956125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:02.222 [2024-11-15 11:19:44.956484] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.222 BaseBdev2 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.222 [ 00:08:02.222 { 00:08:02.222 "name": "BaseBdev2", 00:08:02.222 "aliases": [ 00:08:02.222 "0709bdc2-82e0-42fb-b782-d4258e37c4e3" 00:08:02.222 ], 00:08:02.222 "product_name": "Malloc disk", 00:08:02.222 "block_size": 512, 00:08:02.222 "num_blocks": 65536, 00:08:02.222 "uuid": "0709bdc2-82e0-42fb-b782-d4258e37c4e3", 00:08:02.222 
"assigned_rate_limits": { 00:08:02.222 "rw_ios_per_sec": 0, 00:08:02.222 "rw_mbytes_per_sec": 0, 00:08:02.222 "r_mbytes_per_sec": 0, 00:08:02.222 "w_mbytes_per_sec": 0 00:08:02.222 }, 00:08:02.222 "claimed": true, 00:08:02.222 "claim_type": "exclusive_write", 00:08:02.222 "zoned": false, 00:08:02.222 "supported_io_types": { 00:08:02.222 "read": true, 00:08:02.222 "write": true, 00:08:02.222 "unmap": true, 00:08:02.222 "flush": true, 00:08:02.222 "reset": true, 00:08:02.222 "nvme_admin": false, 00:08:02.222 "nvme_io": false, 00:08:02.222 "nvme_io_md": false, 00:08:02.222 "write_zeroes": true, 00:08:02.222 "zcopy": true, 00:08:02.222 "get_zone_info": false, 00:08:02.222 "zone_management": false, 00:08:02.222 "zone_append": false, 00:08:02.222 "compare": false, 00:08:02.222 "compare_and_write": false, 00:08:02.222 "abort": true, 00:08:02.222 "seek_hole": false, 00:08:02.222 "seek_data": false, 00:08:02.222 "copy": true, 00:08:02.222 "nvme_iov_md": false 00:08:02.222 }, 00:08:02.222 "memory_domains": [ 00:08:02.222 { 00:08:02.222 "dma_device_id": "system", 00:08:02.222 "dma_device_type": 1 00:08:02.222 }, 00:08:02.222 { 00:08:02.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.222 "dma_device_type": 2 00:08:02.222 } 00:08:02.222 ], 00:08:02.222 "driver_specific": {} 00:08:02.222 } 00:08:02.222 ] 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.222 11:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.222 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.222 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.222 "name": "Existed_Raid", 00:08:02.222 "uuid": "f3921415-81e7-46ba-850c-3ac655e74d6e", 00:08:02.222 "strip_size_kb": 64, 00:08:02.222 "state": "online", 00:08:02.222 "raid_level": "raid0", 00:08:02.222 "superblock": false, 00:08:02.222 "num_base_bdevs": 2, 00:08:02.222 "num_base_bdevs_discovered": 2, 00:08:02.222 "num_base_bdevs_operational": 2, 00:08:02.222 "base_bdevs_list": [ 00:08:02.222 { 
00:08:02.222 "name": "BaseBdev1", 00:08:02.222 "uuid": "c0b13f11-56cb-47af-96f5-66315563333e", 00:08:02.222 "is_configured": true, 00:08:02.222 "data_offset": 0, 00:08:02.222 "data_size": 65536 00:08:02.222 }, 00:08:02.222 { 00:08:02.222 "name": "BaseBdev2", 00:08:02.222 "uuid": "0709bdc2-82e0-42fb-b782-d4258e37c4e3", 00:08:02.222 "is_configured": true, 00:08:02.222 "data_offset": 0, 00:08:02.222 "data_size": 65536 00:08:02.222 } 00:08:02.222 ] 00:08:02.222 }' 00:08:02.222 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.222 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.788 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:02.788 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:02.788 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.789 [2024-11-15 11:19:45.508331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.789 "name": "Existed_Raid", 00:08:02.789 "aliases": [ 00:08:02.789 "f3921415-81e7-46ba-850c-3ac655e74d6e" 00:08:02.789 ], 00:08:02.789 "product_name": "Raid Volume", 00:08:02.789 "block_size": 512, 00:08:02.789 "num_blocks": 131072, 00:08:02.789 "uuid": "f3921415-81e7-46ba-850c-3ac655e74d6e", 00:08:02.789 "assigned_rate_limits": { 00:08:02.789 "rw_ios_per_sec": 0, 00:08:02.789 "rw_mbytes_per_sec": 0, 00:08:02.789 "r_mbytes_per_sec": 0, 00:08:02.789 "w_mbytes_per_sec": 0 00:08:02.789 }, 00:08:02.789 "claimed": false, 00:08:02.789 "zoned": false, 00:08:02.789 "supported_io_types": { 00:08:02.789 "read": true, 00:08:02.789 "write": true, 00:08:02.789 "unmap": true, 00:08:02.789 "flush": true, 00:08:02.789 "reset": true, 00:08:02.789 "nvme_admin": false, 00:08:02.789 "nvme_io": false, 00:08:02.789 "nvme_io_md": false, 00:08:02.789 "write_zeroes": true, 00:08:02.789 "zcopy": false, 00:08:02.789 "get_zone_info": false, 00:08:02.789 "zone_management": false, 00:08:02.789 "zone_append": false, 00:08:02.789 "compare": false, 00:08:02.789 "compare_and_write": false, 00:08:02.789 "abort": false, 00:08:02.789 "seek_hole": false, 00:08:02.789 "seek_data": false, 00:08:02.789 "copy": false, 00:08:02.789 "nvme_iov_md": false 00:08:02.789 }, 00:08:02.789 "memory_domains": [ 00:08:02.789 { 00:08:02.789 "dma_device_id": "system", 00:08:02.789 "dma_device_type": 1 00:08:02.789 }, 00:08:02.789 { 00:08:02.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.789 "dma_device_type": 2 00:08:02.789 }, 00:08:02.789 { 00:08:02.789 "dma_device_id": "system", 00:08:02.789 "dma_device_type": 1 00:08:02.789 }, 00:08:02.789 { 00:08:02.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.789 "dma_device_type": 2 00:08:02.789 } 00:08:02.789 ], 00:08:02.789 "driver_specific": { 00:08:02.789 "raid": { 00:08:02.789 "uuid": "f3921415-81e7-46ba-850c-3ac655e74d6e", 
00:08:02.789 "strip_size_kb": 64, 00:08:02.789 "state": "online", 00:08:02.789 "raid_level": "raid0", 00:08:02.789 "superblock": false, 00:08:02.789 "num_base_bdevs": 2, 00:08:02.789 "num_base_bdevs_discovered": 2, 00:08:02.789 "num_base_bdevs_operational": 2, 00:08:02.789 "base_bdevs_list": [ 00:08:02.789 { 00:08:02.789 "name": "BaseBdev1", 00:08:02.789 "uuid": "c0b13f11-56cb-47af-96f5-66315563333e", 00:08:02.789 "is_configured": true, 00:08:02.789 "data_offset": 0, 00:08:02.789 "data_size": 65536 00:08:02.789 }, 00:08:02.789 { 00:08:02.789 "name": "BaseBdev2", 00:08:02.789 "uuid": "0709bdc2-82e0-42fb-b782-d4258e37c4e3", 00:08:02.789 "is_configured": true, 00:08:02.789 "data_offset": 0, 00:08:02.789 "data_size": 65536 00:08:02.789 } 00:08:02.789 ] 00:08:02.789 } 00:08:02.789 } 00:08:02.789 }' 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:02.789 BaseBdev2' 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.789 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.048 [2024-11-15 11:19:45.771975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:03.048 [2024-11-15 11:19:45.772018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.048 [2024-11-15 11:19:45.772085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.048 11:19:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.048 "name": "Existed_Raid", 00:08:03.048 "uuid": "f3921415-81e7-46ba-850c-3ac655e74d6e", 00:08:03.048 "strip_size_kb": 64, 00:08:03.048 "state": "offline", 00:08:03.048 "raid_level": "raid0", 00:08:03.048 "superblock": false, 00:08:03.048 "num_base_bdevs": 2, 00:08:03.048 "num_base_bdevs_discovered": 1, 00:08:03.048 "num_base_bdevs_operational": 1, 00:08:03.048 "base_bdevs_list": [ 00:08:03.048 { 00:08:03.048 "name": null, 00:08:03.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.048 "is_configured": false, 00:08:03.048 "data_offset": 0, 00:08:03.048 "data_size": 65536 00:08:03.048 }, 00:08:03.048 { 00:08:03.048 "name": "BaseBdev2", 00:08:03.048 "uuid": "0709bdc2-82e0-42fb-b782-d4258e37c4e3", 00:08:03.048 "is_configured": true, 00:08:03.048 "data_offset": 0, 00:08:03.048 "data_size": 65536 00:08:03.048 } 00:08:03.048 ] 00:08:03.048 }' 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.048 11:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.615 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:03.615 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.615 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.615 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.615 11:19:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.615 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:03.615 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.615 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:03.615 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:03.615 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:03.615 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.615 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.615 [2024-11-15 11:19:46.480714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:03.615 [2024-11-15 11:19:46.480790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:03.876 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.876 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:03.876 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.876 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60499 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60499 ']' 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 60499 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60499 00:08:03.877 killing process with pid 60499 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60499' 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60499 00:08:03.877 [2024-11-15 11:19:46.661117] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.877 11:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60499 00:08:03.877 [2024-11-15 11:19:46.676480] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.821 11:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:04.821 00:08:04.821 real 0m5.574s 00:08:04.821 user 0m8.446s 00:08:04.821 sys 
0m0.847s 00:08:04.821 11:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.821 11:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.821 ************************************ 00:08:04.821 END TEST raid_state_function_test 00:08:04.821 ************************************ 00:08:04.821 11:19:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:04.821 11:19:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:04.821 11:19:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.821 11:19:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.821 ************************************ 00:08:04.822 START TEST raid_state_function_test_sb 00:08:04.822 ************************************ 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:04.822 Process raid pid: 60757 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60757 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60757' 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60757 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 60757 ']' 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:04.822 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.080 [2024-11-15 11:19:47.847152] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:08:05.080 [2024-11-15 11:19:47.847687] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.080 [2024-11-15 11:19:48.021007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.339 [2024-11-15 11:19:48.169356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.597 [2024-11-15 11:19:48.379721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.597 [2024-11-15 11:19:48.379765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.163 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:06.163 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:06.163 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:06.163 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.163 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.163 [2024-11-15 11:19:48.884829] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:06.163 [2024-11-15 11:19:48.884911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:06.163 [2024-11-15 11:19:48.884943] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.163 [2024-11-15 11:19:48.884959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.163 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.163 
11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:06.163 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.163 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.163 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.163 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.163 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.163 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.164 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.164 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.164 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.164 "name": "Existed_Raid", 00:08:06.164 "uuid": "b54c8cfc-cf24-4ed6-a495-987d50ce23a8", 00:08:06.164 "strip_size_kb": 
64, 00:08:06.164 "state": "configuring", 00:08:06.164 "raid_level": "raid0", 00:08:06.164 "superblock": true, 00:08:06.164 "num_base_bdevs": 2, 00:08:06.164 "num_base_bdevs_discovered": 0, 00:08:06.164 "num_base_bdevs_operational": 2, 00:08:06.164 "base_bdevs_list": [ 00:08:06.164 { 00:08:06.164 "name": "BaseBdev1", 00:08:06.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.164 "is_configured": false, 00:08:06.164 "data_offset": 0, 00:08:06.164 "data_size": 0 00:08:06.164 }, 00:08:06.164 { 00:08:06.164 "name": "BaseBdev2", 00:08:06.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.164 "is_configured": false, 00:08:06.164 "data_offset": 0, 00:08:06.164 "data_size": 0 00:08:06.164 } 00:08:06.164 ] 00:08:06.164 }' 00:08:06.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.164 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.730 [2024-11-15 11:19:49.412943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.730 [2024-11-15 11:19:49.413006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.730 11:19:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.730 [2024-11-15 11:19:49.420917] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:06.730 [2024-11-15 11:19:49.420999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:06.730 [2024-11-15 11:19:49.421014] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.730 [2024-11-15 11:19:49.421032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.730 [2024-11-15 11:19:49.465660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.730 BaseBdev1 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.730 [ 00:08:06.730 { 00:08:06.730 "name": "BaseBdev1", 00:08:06.730 "aliases": [ 00:08:06.730 "71291b2b-59d2-4622-8414-1e572d6ff49d" 00:08:06.730 ], 00:08:06.730 "product_name": "Malloc disk", 00:08:06.730 "block_size": 512, 00:08:06.730 "num_blocks": 65536, 00:08:06.730 "uuid": "71291b2b-59d2-4622-8414-1e572d6ff49d", 00:08:06.730 "assigned_rate_limits": { 00:08:06.730 "rw_ios_per_sec": 0, 00:08:06.730 "rw_mbytes_per_sec": 0, 00:08:06.730 "r_mbytes_per_sec": 0, 00:08:06.730 "w_mbytes_per_sec": 0 00:08:06.730 }, 00:08:06.730 "claimed": true, 00:08:06.730 "claim_type": "exclusive_write", 00:08:06.730 "zoned": false, 00:08:06.730 "supported_io_types": { 00:08:06.730 "read": true, 00:08:06.730 "write": true, 00:08:06.730 "unmap": true, 00:08:06.730 "flush": true, 00:08:06.730 "reset": true, 00:08:06.730 "nvme_admin": false, 00:08:06.730 "nvme_io": false, 00:08:06.730 "nvme_io_md": false, 00:08:06.730 "write_zeroes": true, 00:08:06.730 "zcopy": true, 00:08:06.730 "get_zone_info": false, 00:08:06.730 "zone_management": false, 00:08:06.730 "zone_append": false, 00:08:06.730 "compare": false, 00:08:06.730 "compare_and_write": false, 00:08:06.730 
"abort": true, 00:08:06.730 "seek_hole": false, 00:08:06.730 "seek_data": false, 00:08:06.730 "copy": true, 00:08:06.730 "nvme_iov_md": false 00:08:06.730 }, 00:08:06.730 "memory_domains": [ 00:08:06.730 { 00:08:06.730 "dma_device_id": "system", 00:08:06.730 "dma_device_type": 1 00:08:06.730 }, 00:08:06.730 { 00:08:06.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.730 "dma_device_type": 2 00:08:06.730 } 00:08:06.730 ], 00:08:06.730 "driver_specific": {} 00:08:06.730 } 00:08:06.730 ] 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.730 "name": "Existed_Raid", 00:08:06.730 "uuid": "cdb34725-11d9-4d19-adfe-0c4dd66a7a95", 00:08:06.730 "strip_size_kb": 64, 00:08:06.730 "state": "configuring", 00:08:06.730 "raid_level": "raid0", 00:08:06.730 "superblock": true, 00:08:06.730 "num_base_bdevs": 2, 00:08:06.730 "num_base_bdevs_discovered": 1, 00:08:06.730 "num_base_bdevs_operational": 2, 00:08:06.730 "base_bdevs_list": [ 00:08:06.730 { 00:08:06.730 "name": "BaseBdev1", 00:08:06.730 "uuid": "71291b2b-59d2-4622-8414-1e572d6ff49d", 00:08:06.730 "is_configured": true, 00:08:06.730 "data_offset": 2048, 00:08:06.730 "data_size": 63488 00:08:06.730 }, 00:08:06.730 { 00:08:06.730 "name": "BaseBdev2", 00:08:06.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.730 "is_configured": false, 00:08:06.730 "data_offset": 0, 00:08:06.730 "data_size": 0 00:08:06.730 } 00:08:06.730 ] 00:08:06.730 }' 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.730 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.297 [2024-11-15 11:19:50.029906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:07.297 [2024-11-15 11:19:50.030016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.297 [2024-11-15 11:19:50.037945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.297 [2024-11-15 11:19:50.040642] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.297 [2024-11-15 11:19:50.040721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.297 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.298 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.298 "name": "Existed_Raid", 00:08:07.298 "uuid": "a8576438-e24e-41d8-be8c-7488ae07fd53", 00:08:07.298 "strip_size_kb": 64, 00:08:07.298 "state": "configuring", 00:08:07.298 "raid_level": "raid0", 00:08:07.298 "superblock": true, 00:08:07.298 "num_base_bdevs": 2, 00:08:07.298 "num_base_bdevs_discovered": 1, 00:08:07.298 "num_base_bdevs_operational": 2, 00:08:07.298 "base_bdevs_list": [ 00:08:07.298 { 00:08:07.298 "name": "BaseBdev1", 00:08:07.298 "uuid": "71291b2b-59d2-4622-8414-1e572d6ff49d", 00:08:07.298 "is_configured": true, 00:08:07.298 "data_offset": 2048, 
00:08:07.298 "data_size": 63488 00:08:07.298 }, 00:08:07.298 { 00:08:07.298 "name": "BaseBdev2", 00:08:07.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.298 "is_configured": false, 00:08:07.298 "data_offset": 0, 00:08:07.298 "data_size": 0 00:08:07.298 } 00:08:07.298 ] 00:08:07.298 }' 00:08:07.298 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.298 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.865 [2024-11-15 11:19:50.584681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.865 [2024-11-15 11:19:50.584990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:07.865 [2024-11-15 11:19:50.585008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:07.865 [2024-11-15 11:19:50.585415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:07.865 BaseBdev2 00:08:07.865 [2024-11-15 11:19:50.585666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:07.865 [2024-11-15 11:19:50.585688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:07.865 [2024-11-15 11:19:50.585859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.865 [ 00:08:07.865 { 00:08:07.865 "name": "BaseBdev2", 00:08:07.865 "aliases": [ 00:08:07.865 "9a7fad28-e35c-43cf-86b7-33f573a0980a" 00:08:07.865 ], 00:08:07.865 "product_name": "Malloc disk", 00:08:07.865 "block_size": 512, 00:08:07.865 "num_blocks": 65536, 00:08:07.865 "uuid": "9a7fad28-e35c-43cf-86b7-33f573a0980a", 00:08:07.865 "assigned_rate_limits": { 00:08:07.865 "rw_ios_per_sec": 0, 00:08:07.865 "rw_mbytes_per_sec": 0, 00:08:07.865 "r_mbytes_per_sec": 0, 00:08:07.865 "w_mbytes_per_sec": 0 00:08:07.865 }, 00:08:07.865 "claimed": true, 00:08:07.865 "claim_type": 
"exclusive_write", 00:08:07.865 "zoned": false, 00:08:07.865 "supported_io_types": { 00:08:07.865 "read": true, 00:08:07.865 "write": true, 00:08:07.865 "unmap": true, 00:08:07.865 "flush": true, 00:08:07.865 "reset": true, 00:08:07.865 "nvme_admin": false, 00:08:07.865 "nvme_io": false, 00:08:07.865 "nvme_io_md": false, 00:08:07.865 "write_zeroes": true, 00:08:07.865 "zcopy": true, 00:08:07.865 "get_zone_info": false, 00:08:07.865 "zone_management": false, 00:08:07.865 "zone_append": false, 00:08:07.865 "compare": false, 00:08:07.865 "compare_and_write": false, 00:08:07.865 "abort": true, 00:08:07.865 "seek_hole": false, 00:08:07.865 "seek_data": false, 00:08:07.865 "copy": true, 00:08:07.865 "nvme_iov_md": false 00:08:07.865 }, 00:08:07.865 "memory_domains": [ 00:08:07.865 { 00:08:07.865 "dma_device_id": "system", 00:08:07.865 "dma_device_type": 1 00:08:07.865 }, 00:08:07.865 { 00:08:07.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.865 "dma_device_type": 2 00:08:07.865 } 00:08:07.865 ], 00:08:07.865 "driver_specific": {} 00:08:07.865 } 00:08:07.865 ] 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.865 "name": "Existed_Raid", 00:08:07.865 "uuid": "a8576438-e24e-41d8-be8c-7488ae07fd53", 00:08:07.865 "strip_size_kb": 64, 00:08:07.865 "state": "online", 00:08:07.865 "raid_level": "raid0", 00:08:07.865 "superblock": true, 00:08:07.865 "num_base_bdevs": 2, 00:08:07.865 "num_base_bdevs_discovered": 2, 00:08:07.865 "num_base_bdevs_operational": 2, 00:08:07.865 "base_bdevs_list": [ 00:08:07.865 { 00:08:07.865 "name": "BaseBdev1", 00:08:07.865 "uuid": "71291b2b-59d2-4622-8414-1e572d6ff49d", 00:08:07.865 "is_configured": true, 00:08:07.865 "data_offset": 2048, 00:08:07.865 "data_size": 63488 
00:08:07.865 }, 00:08:07.865 { 00:08:07.865 "name": "BaseBdev2", 00:08:07.865 "uuid": "9a7fad28-e35c-43cf-86b7-33f573a0980a", 00:08:07.865 "is_configured": true, 00:08:07.865 "data_offset": 2048, 00:08:07.865 "data_size": 63488 00:08:07.865 } 00:08:07.865 ] 00:08:07.865 }' 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.865 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.432 [2024-11-15 11:19:51.141298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.432 "name": 
"Existed_Raid", 00:08:08.432 "aliases": [ 00:08:08.432 "a8576438-e24e-41d8-be8c-7488ae07fd53" 00:08:08.432 ], 00:08:08.432 "product_name": "Raid Volume", 00:08:08.432 "block_size": 512, 00:08:08.432 "num_blocks": 126976, 00:08:08.432 "uuid": "a8576438-e24e-41d8-be8c-7488ae07fd53", 00:08:08.432 "assigned_rate_limits": { 00:08:08.432 "rw_ios_per_sec": 0, 00:08:08.432 "rw_mbytes_per_sec": 0, 00:08:08.432 "r_mbytes_per_sec": 0, 00:08:08.432 "w_mbytes_per_sec": 0 00:08:08.432 }, 00:08:08.432 "claimed": false, 00:08:08.432 "zoned": false, 00:08:08.432 "supported_io_types": { 00:08:08.432 "read": true, 00:08:08.432 "write": true, 00:08:08.432 "unmap": true, 00:08:08.432 "flush": true, 00:08:08.432 "reset": true, 00:08:08.432 "nvme_admin": false, 00:08:08.432 "nvme_io": false, 00:08:08.432 "nvme_io_md": false, 00:08:08.432 "write_zeroes": true, 00:08:08.432 "zcopy": false, 00:08:08.432 "get_zone_info": false, 00:08:08.432 "zone_management": false, 00:08:08.432 "zone_append": false, 00:08:08.432 "compare": false, 00:08:08.432 "compare_and_write": false, 00:08:08.432 "abort": false, 00:08:08.432 "seek_hole": false, 00:08:08.432 "seek_data": false, 00:08:08.432 "copy": false, 00:08:08.432 "nvme_iov_md": false 00:08:08.432 }, 00:08:08.432 "memory_domains": [ 00:08:08.432 { 00:08:08.432 "dma_device_id": "system", 00:08:08.432 "dma_device_type": 1 00:08:08.432 }, 00:08:08.432 { 00:08:08.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.432 "dma_device_type": 2 00:08:08.432 }, 00:08:08.432 { 00:08:08.432 "dma_device_id": "system", 00:08:08.432 "dma_device_type": 1 00:08:08.432 }, 00:08:08.432 { 00:08:08.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.432 "dma_device_type": 2 00:08:08.432 } 00:08:08.432 ], 00:08:08.432 "driver_specific": { 00:08:08.432 "raid": { 00:08:08.432 "uuid": "a8576438-e24e-41d8-be8c-7488ae07fd53", 00:08:08.432 "strip_size_kb": 64, 00:08:08.432 "state": "online", 00:08:08.432 "raid_level": "raid0", 00:08:08.432 "superblock": true, 00:08:08.432 
"num_base_bdevs": 2, 00:08:08.432 "num_base_bdevs_discovered": 2, 00:08:08.432 "num_base_bdevs_operational": 2, 00:08:08.432 "base_bdevs_list": [ 00:08:08.432 { 00:08:08.432 "name": "BaseBdev1", 00:08:08.432 "uuid": "71291b2b-59d2-4622-8414-1e572d6ff49d", 00:08:08.432 "is_configured": true, 00:08:08.432 "data_offset": 2048, 00:08:08.432 "data_size": 63488 00:08:08.432 }, 00:08:08.432 { 00:08:08.432 "name": "BaseBdev2", 00:08:08.432 "uuid": "9a7fad28-e35c-43cf-86b7-33f573a0980a", 00:08:08.432 "is_configured": true, 00:08:08.432 "data_offset": 2048, 00:08:08.432 "data_size": 63488 00:08:08.432 } 00:08:08.432 ] 00:08:08.432 } 00:08:08.432 } 00:08:08.432 }' 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:08.432 BaseBdev2' 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.432 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.433 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.433 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.690 [2024-11-15 11:19:51.400984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:08.690 [2024-11-15 11:19:51.401044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.690 [2024-11-15 11:19:51.401115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.690 11:19:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.690 "name": "Existed_Raid", 00:08:08.690 "uuid": "a8576438-e24e-41d8-be8c-7488ae07fd53", 00:08:08.690 "strip_size_kb": 64, 00:08:08.690 "state": "offline", 00:08:08.690 "raid_level": "raid0", 00:08:08.690 "superblock": true, 00:08:08.690 "num_base_bdevs": 2, 00:08:08.690 "num_base_bdevs_discovered": 1, 00:08:08.690 "num_base_bdevs_operational": 1, 00:08:08.690 "base_bdevs_list": [ 00:08:08.690 { 00:08:08.690 "name": null, 00:08:08.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.690 "is_configured": false, 00:08:08.690 "data_offset": 0, 00:08:08.690 "data_size": 63488 00:08:08.690 }, 00:08:08.690 { 00:08:08.690 "name": "BaseBdev2", 00:08:08.690 "uuid": "9a7fad28-e35c-43cf-86b7-33f573a0980a", 00:08:08.690 "is_configured": true, 00:08:08.690 "data_offset": 2048, 00:08:08.690 "data_size": 63488 00:08:08.690 } 00:08:08.690 ] 00:08:08.690 }' 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.690 11:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.256 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:09.256 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.257 11:19:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.257 [2024-11-15 11:19:52.071932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:09.257 [2024-11-15 11:19:52.072025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.257 11:19:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60757 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 60757 ']' 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 60757 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60757 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:09.515 killing process with pid 60757 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60757' 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 60757 00:08:09.515 [2024-11-15 11:19:52.249903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.515 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 60757 00:08:09.515 [2024-11-15 11:19:52.265140] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.450 11:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:08:10.450 00:08:10.450 real 0m5.544s 00:08:10.450 user 0m8.397s 00:08:10.450 sys 0m0.834s 00:08:10.450 11:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:10.450 11:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.450 ************************************ 00:08:10.450 END TEST raid_state_function_test_sb 00:08:10.450 ************************************ 00:08:10.450 11:19:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:10.451 11:19:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:10.451 11:19:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:10.451 11:19:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.451 ************************************ 00:08:10.451 START TEST raid_superblock_test 00:08:10.451 ************************************ 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61015 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61015 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61015 ']' 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:10.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:10.451 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.709 [2024-11-15 11:19:53.468403] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:08:10.709 [2024-11-15 11:19:53.469613] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61015 ] 00:08:10.968 [2024-11-15 11:19:53.659257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.968 [2024-11-15 11:19:53.803216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.227 [2024-11-15 11:19:54.024785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.227 [2024-11-15 11:19:54.024829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.550 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:11.550 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:11.550 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:11.550 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:11.550 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:11.550 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:11.550 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:11.550 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:11.550 11:19:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:11.550 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:11.550 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:11.550 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.550 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.831 malloc1 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.831 [2024-11-15 11:19:54.500141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:11.831 [2024-11-15 11:19:54.500286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.831 [2024-11-15 11:19:54.500320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:11.831 [2024-11-15 11:19:54.500350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.831 [2024-11-15 11:19:54.503773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.831 [2024-11-15 11:19:54.503817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:11.831 pt1 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:11.831 11:19:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.831 malloc2 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.831 [2024-11-15 11:19:54.558825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:11.831 [2024-11-15 11:19:54.559082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.831 [2024-11-15 11:19:54.559131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:11.831 
[2024-11-15 11:19:54.559146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.831 [2024-11-15 11:19:54.562082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.831 [2024-11-15 11:19:54.562265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:11.831 pt2 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.831 [2024-11-15 11:19:54.570952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:11.831 [2024-11-15 11:19:54.573502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:11.831 [2024-11-15 11:19:54.573711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:11.831 [2024-11-15 11:19:54.573729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:11.831 [2024-11-15 11:19:54.574034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:11.831 [2024-11-15 11:19:54.574254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:11.831 [2024-11-15 11:19:54.574275] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:11.831 [2024-11-15 11:19:54.574499] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:11.831 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.832 "name": "raid_bdev1", 00:08:11.832 "uuid": 
"3d8e2343-6c59-462b-bc28-27663be31a32", 00:08:11.832 "strip_size_kb": 64, 00:08:11.832 "state": "online", 00:08:11.832 "raid_level": "raid0", 00:08:11.832 "superblock": true, 00:08:11.832 "num_base_bdevs": 2, 00:08:11.832 "num_base_bdevs_discovered": 2, 00:08:11.832 "num_base_bdevs_operational": 2, 00:08:11.832 "base_bdevs_list": [ 00:08:11.832 { 00:08:11.832 "name": "pt1", 00:08:11.832 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.832 "is_configured": true, 00:08:11.832 "data_offset": 2048, 00:08:11.832 "data_size": 63488 00:08:11.832 }, 00:08:11.832 { 00:08:11.832 "name": "pt2", 00:08:11.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.832 "is_configured": true, 00:08:11.832 "data_offset": 2048, 00:08:11.832 "data_size": 63488 00:08:11.832 } 00:08:11.832 ] 00:08:11.832 }' 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.832 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.397 
11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.397 [2024-11-15 11:19:55.103546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.397 "name": "raid_bdev1", 00:08:12.397 "aliases": [ 00:08:12.397 "3d8e2343-6c59-462b-bc28-27663be31a32" 00:08:12.397 ], 00:08:12.397 "product_name": "Raid Volume", 00:08:12.397 "block_size": 512, 00:08:12.397 "num_blocks": 126976, 00:08:12.397 "uuid": "3d8e2343-6c59-462b-bc28-27663be31a32", 00:08:12.397 "assigned_rate_limits": { 00:08:12.397 "rw_ios_per_sec": 0, 00:08:12.397 "rw_mbytes_per_sec": 0, 00:08:12.397 "r_mbytes_per_sec": 0, 00:08:12.397 "w_mbytes_per_sec": 0 00:08:12.397 }, 00:08:12.397 "claimed": false, 00:08:12.397 "zoned": false, 00:08:12.397 "supported_io_types": { 00:08:12.397 "read": true, 00:08:12.397 "write": true, 00:08:12.397 "unmap": true, 00:08:12.397 "flush": true, 00:08:12.397 "reset": true, 00:08:12.397 "nvme_admin": false, 00:08:12.397 "nvme_io": false, 00:08:12.397 "nvme_io_md": false, 00:08:12.397 "write_zeroes": true, 00:08:12.397 "zcopy": false, 00:08:12.397 "get_zone_info": false, 00:08:12.397 "zone_management": false, 00:08:12.397 "zone_append": false, 00:08:12.397 "compare": false, 00:08:12.397 "compare_and_write": false, 00:08:12.397 "abort": false, 00:08:12.397 "seek_hole": false, 00:08:12.397 "seek_data": false, 00:08:12.397 "copy": false, 00:08:12.397 "nvme_iov_md": false 00:08:12.397 }, 00:08:12.397 "memory_domains": [ 00:08:12.397 { 00:08:12.397 "dma_device_id": "system", 00:08:12.397 "dma_device_type": 1 00:08:12.397 }, 00:08:12.397 { 00:08:12.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.397 "dma_device_type": 2 00:08:12.397 }, 00:08:12.397 { 00:08:12.397 "dma_device_id": "system", 00:08:12.397 
"dma_device_type": 1 00:08:12.397 }, 00:08:12.397 { 00:08:12.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.397 "dma_device_type": 2 00:08:12.397 } 00:08:12.397 ], 00:08:12.397 "driver_specific": { 00:08:12.397 "raid": { 00:08:12.397 "uuid": "3d8e2343-6c59-462b-bc28-27663be31a32", 00:08:12.397 "strip_size_kb": 64, 00:08:12.397 "state": "online", 00:08:12.397 "raid_level": "raid0", 00:08:12.397 "superblock": true, 00:08:12.397 "num_base_bdevs": 2, 00:08:12.397 "num_base_bdevs_discovered": 2, 00:08:12.397 "num_base_bdevs_operational": 2, 00:08:12.397 "base_bdevs_list": [ 00:08:12.397 { 00:08:12.397 "name": "pt1", 00:08:12.397 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:12.397 "is_configured": true, 00:08:12.397 "data_offset": 2048, 00:08:12.397 "data_size": 63488 00:08:12.397 }, 00:08:12.397 { 00:08:12.397 "name": "pt2", 00:08:12.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.397 "is_configured": true, 00:08:12.397 "data_offset": 2048, 00:08:12.397 "data_size": 63488 00:08:12.397 } 00:08:12.397 ] 00:08:12.397 } 00:08:12.397 } 00:08:12.397 }' 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:12.397 pt2' 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.397 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.656 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.656 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.656 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:12.656 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:12.656 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.656 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.656 [2024-11-15 11:19:55.371476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:08:12.656 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.656 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3d8e2343-6c59-462b-bc28-27663be31a32 00:08:12.656 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3d8e2343-6c59-462b-bc28-27663be31a32 ']' 00:08:12.656 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:12.656 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.656 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.657 [2024-11-15 11:19:55.419097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.657 [2024-11-15 11:19:55.419337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.657 [2024-11-15 11:19:55.419463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.657 [2024-11-15 11:19:55.419548] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.657 [2024-11-15 11:19:55.419568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.657 
11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.657 [2024-11-15 11:19:55.563193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:12.657 [2024-11-15 11:19:55.566071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:12.657 [2024-11-15 11:19:55.566166] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:12.657 [2024-11-15 11:19:55.566267] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:12.657 [2024-11-15 11:19:55.566319] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.657 [2024-11-15 11:19:55.566339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:12.657 request: 00:08:12.657 { 00:08:12.657 "name": "raid_bdev1", 00:08:12.657 "raid_level": "raid0", 00:08:12.657 "base_bdevs": [ 00:08:12.657 "malloc1", 00:08:12.657 "malloc2" 00:08:12.657 ], 00:08:12.657 "strip_size_kb": 64, 00:08:12.657 "superblock": false, 00:08:12.657 "method": "bdev_raid_create", 00:08:12.657 "req_id": 1 00:08:12.657 } 00:08:12.657 Got JSON-RPC error response 00:08:12.657 response: 00:08:12.657 { 00:08:12.657 "code": -17, 00:08:12.657 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:12.657 } 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.657 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.915 [2024-11-15 11:19:55.631316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:12.915 [2024-11-15 11:19:55.631443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.915 [2024-11-15 11:19:55.631474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:12.915 [2024-11-15 11:19:55.631491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.915 [2024-11-15 11:19:55.634982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.915 [2024-11-15 11:19:55.635229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:12.915 [2024-11-15 11:19:55.635367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:12.915 [2024-11-15 11:19:55.635452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:12.915 pt1 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.915 "name": "raid_bdev1", 00:08:12.915 "uuid": "3d8e2343-6c59-462b-bc28-27663be31a32", 00:08:12.915 "strip_size_kb": 64, 00:08:12.915 "state": "configuring", 00:08:12.915 "raid_level": "raid0", 00:08:12.915 "superblock": true, 00:08:12.915 "num_base_bdevs": 2, 00:08:12.915 "num_base_bdevs_discovered": 1, 00:08:12.915 "num_base_bdevs_operational": 2, 00:08:12.915 "base_bdevs_list": [ 00:08:12.915 { 00:08:12.915 "name": "pt1", 00:08:12.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:12.915 "is_configured": true, 00:08:12.915 "data_offset": 2048, 00:08:12.915 "data_size": 63488 00:08:12.915 }, 00:08:12.915 { 00:08:12.915 "name": null, 00:08:12.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.915 "is_configured": false, 00:08:12.915 "data_offset": 2048, 00:08:12.915 "data_size": 63488 00:08:12.915 } 00:08:12.915 ] 00:08:12.915 }' 00:08:12.915 11:19:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.915 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.481 [2024-11-15 11:19:56.175734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:13.481 [2024-11-15 11:19:56.175867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.481 [2024-11-15 11:19:56.175902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:13.481 [2024-11-15 11:19:56.175920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.481 [2024-11-15 11:19:56.176670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.481 [2024-11-15 11:19:56.176712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:13.481 [2024-11-15 11:19:56.176825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:13.481 [2024-11-15 11:19:56.176872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.481 [2024-11-15 11:19:56.177033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:13.481 [2024-11-15 11:19:56.177055] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:13.481 [2024-11-15 11:19:56.177425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:13.481 [2024-11-15 11:19:56.177639] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:13.481 [2024-11-15 11:19:56.177668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:13.481 [2024-11-15 11:19:56.177847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.481 pt2 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.481 "name": "raid_bdev1", 00:08:13.481 "uuid": "3d8e2343-6c59-462b-bc28-27663be31a32", 00:08:13.481 "strip_size_kb": 64, 00:08:13.481 "state": "online", 00:08:13.481 "raid_level": "raid0", 00:08:13.481 "superblock": true, 00:08:13.481 "num_base_bdevs": 2, 00:08:13.481 "num_base_bdevs_discovered": 2, 00:08:13.481 "num_base_bdevs_operational": 2, 00:08:13.481 "base_bdevs_list": [ 00:08:13.481 { 00:08:13.481 "name": "pt1", 00:08:13.481 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.481 "is_configured": true, 00:08:13.481 "data_offset": 2048, 00:08:13.481 "data_size": 63488 00:08:13.481 }, 00:08:13.481 { 00:08:13.481 "name": "pt2", 00:08:13.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.481 "is_configured": true, 00:08:13.481 "data_offset": 2048, 00:08:13.481 "data_size": 63488 00:08:13.481 } 00:08:13.481 ] 00:08:13.481 }' 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.481 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.048 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:14.048 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:14.048 
11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:14.048 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:14.048 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:14.048 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:14.048 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:14.048 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.048 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.048 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.048 [2024-11-15 11:19:56.724138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.048 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.048 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:14.048 "name": "raid_bdev1", 00:08:14.048 "aliases": [ 00:08:14.048 "3d8e2343-6c59-462b-bc28-27663be31a32" 00:08:14.048 ], 00:08:14.048 "product_name": "Raid Volume", 00:08:14.048 "block_size": 512, 00:08:14.048 "num_blocks": 126976, 00:08:14.048 "uuid": "3d8e2343-6c59-462b-bc28-27663be31a32", 00:08:14.048 "assigned_rate_limits": { 00:08:14.048 "rw_ios_per_sec": 0, 00:08:14.048 "rw_mbytes_per_sec": 0, 00:08:14.048 "r_mbytes_per_sec": 0, 00:08:14.048 "w_mbytes_per_sec": 0 00:08:14.048 }, 00:08:14.048 "claimed": false, 00:08:14.048 "zoned": false, 00:08:14.048 "supported_io_types": { 00:08:14.048 "read": true, 00:08:14.048 "write": true, 00:08:14.048 "unmap": true, 00:08:14.048 "flush": true, 00:08:14.048 "reset": true, 00:08:14.048 "nvme_admin": false, 00:08:14.048 "nvme_io": false, 00:08:14.048 "nvme_io_md": false, 00:08:14.048 
"write_zeroes": true, 00:08:14.048 "zcopy": false, 00:08:14.048 "get_zone_info": false, 00:08:14.048 "zone_management": false, 00:08:14.048 "zone_append": false, 00:08:14.048 "compare": false, 00:08:14.048 "compare_and_write": false, 00:08:14.048 "abort": false, 00:08:14.048 "seek_hole": false, 00:08:14.048 "seek_data": false, 00:08:14.048 "copy": false, 00:08:14.048 "nvme_iov_md": false 00:08:14.048 }, 00:08:14.048 "memory_domains": [ 00:08:14.048 { 00:08:14.048 "dma_device_id": "system", 00:08:14.048 "dma_device_type": 1 00:08:14.048 }, 00:08:14.048 { 00:08:14.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.048 "dma_device_type": 2 00:08:14.048 }, 00:08:14.048 { 00:08:14.048 "dma_device_id": "system", 00:08:14.048 "dma_device_type": 1 00:08:14.048 }, 00:08:14.048 { 00:08:14.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.048 "dma_device_type": 2 00:08:14.048 } 00:08:14.048 ], 00:08:14.048 "driver_specific": { 00:08:14.048 "raid": { 00:08:14.048 "uuid": "3d8e2343-6c59-462b-bc28-27663be31a32", 00:08:14.048 "strip_size_kb": 64, 00:08:14.048 "state": "online", 00:08:14.048 "raid_level": "raid0", 00:08:14.048 "superblock": true, 00:08:14.048 "num_base_bdevs": 2, 00:08:14.049 "num_base_bdevs_discovered": 2, 00:08:14.049 "num_base_bdevs_operational": 2, 00:08:14.049 "base_bdevs_list": [ 00:08:14.049 { 00:08:14.049 "name": "pt1", 00:08:14.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.049 "is_configured": true, 00:08:14.049 "data_offset": 2048, 00:08:14.049 "data_size": 63488 00:08:14.049 }, 00:08:14.049 { 00:08:14.049 "name": "pt2", 00:08:14.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.049 "is_configured": true, 00:08:14.049 "data_offset": 2048, 00:08:14.049 "data_size": 63488 00:08:14.049 } 00:08:14.049 ] 00:08:14.049 } 00:08:14.049 } 00:08:14.049 }' 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:14.049 pt2' 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.049 11:19:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.049 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.049 [2024-11-15 11:19:56.984049] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3d8e2343-6c59-462b-bc28-27663be31a32 '!=' 3d8e2343-6c59-462b-bc28-27663be31a32 ']' 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61015 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61015 ']' 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61015 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61015 00:08:14.307 killing process with pid 61015 
00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61015' 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61015 00:08:14.307 [2024-11-15 11:19:57.058342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.307 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61015 00:08:14.307 [2024-11-15 11:19:57.058488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.307 [2024-11-15 11:19:57.058573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.307 [2024-11-15 11:19:57.058610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:14.307 [2024-11-15 11:19:57.236351] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.683 11:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:15.683 00:08:15.683 real 0m4.906s 00:08:15.683 user 0m7.166s 00:08:15.683 sys 0m0.810s 00:08:15.683 11:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.683 11:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.683 ************************************ 00:08:15.683 END TEST raid_superblock_test 00:08:15.683 ************************************ 00:08:15.683 11:19:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:15.683 11:19:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:15.683 11:19:58 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.683 11:19:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.683 ************************************ 00:08:15.683 START TEST raid_read_error_test 00:08:15.683 ************************************ 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:15.683 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:15.683 11:19:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dTTb30OeLL 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61232 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61232 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61232 ']' 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:15.684 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.684 [2024-11-15 11:19:58.416250] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:08:15.684 [2024-11-15 11:19:58.416442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61232 ] 00:08:15.684 [2024-11-15 11:19:58.588064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.941 [2024-11-15 11:19:58.718801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.199 [2024-11-15 11:19:58.914615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.199 [2024-11-15 11:19:58.914703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.765 BaseBdev1_malloc 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.765 true 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.765 [2024-11-15 11:19:59.496128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:16.765 [2024-11-15 11:19:59.496262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.765 [2024-11-15 11:19:59.496295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:16.765 [2024-11-15 11:19:59.496313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.765 [2024-11-15 11:19:59.499149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.765 [2024-11-15 11:19:59.499238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:16.765 BaseBdev1 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:16.765 BaseBdev2_malloc 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.765 true 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.765 [2024-11-15 11:19:59.560398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:16.765 [2024-11-15 11:19:59.560497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.765 [2024-11-15 11:19:59.560538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:16.765 [2024-11-15 11:19:59.560555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.765 [2024-11-15 11:19:59.563613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.765 [2024-11-15 11:19:59.563689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:16.765 BaseBdev2 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:16.765 11:19:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.765 [2024-11-15 11:19:59.568571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.765 [2024-11-15 11:19:59.571318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.765 [2024-11-15 11:19:59.571625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:16.765 [2024-11-15 11:19:59.571653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:16.765 [2024-11-15 11:19:59.571951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:16.765 [2024-11-15 11:19:59.572187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:16.765 [2024-11-15 11:19:59.572224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:16.765 [2024-11-15 11:19:59.572481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.765 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.765 "name": "raid_bdev1", 00:08:16.765 "uuid": "a45cef65-abb2-49f5-a1a1-81417ec5a0a2", 00:08:16.765 "strip_size_kb": 64, 00:08:16.765 "state": "online", 00:08:16.765 "raid_level": "raid0", 00:08:16.765 "superblock": true, 00:08:16.765 "num_base_bdevs": 2, 00:08:16.765 "num_base_bdevs_discovered": 2, 00:08:16.766 "num_base_bdevs_operational": 2, 00:08:16.766 "base_bdevs_list": [ 00:08:16.766 { 00:08:16.766 "name": "BaseBdev1", 00:08:16.766 "uuid": "db6f0851-2df0-50cc-8983-a088538430aa", 00:08:16.766 "is_configured": true, 00:08:16.766 "data_offset": 2048, 00:08:16.766 "data_size": 63488 00:08:16.766 }, 00:08:16.766 { 00:08:16.766 "name": "BaseBdev2", 00:08:16.766 "uuid": "804e8948-b4dd-53d5-a9f3-35bad4fffa66", 00:08:16.766 "is_configured": true, 00:08:16.766 "data_offset": 2048, 00:08:16.766 "data_size": 63488 00:08:16.766 } 00:08:16.766 ] 00:08:16.766 }' 00:08:16.766 11:19:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.766 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.331 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:17.331 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:17.331 [2024-11-15 11:20:00.198404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.265 "name": "raid_bdev1", 00:08:18.265 "uuid": "a45cef65-abb2-49f5-a1a1-81417ec5a0a2", 00:08:18.265 "strip_size_kb": 64, 00:08:18.265 "state": "online", 00:08:18.265 "raid_level": "raid0", 00:08:18.265 "superblock": true, 00:08:18.265 "num_base_bdevs": 2, 00:08:18.265 "num_base_bdevs_discovered": 2, 00:08:18.265 "num_base_bdevs_operational": 2, 00:08:18.265 "base_bdevs_list": [ 00:08:18.265 { 00:08:18.265 "name": "BaseBdev1", 00:08:18.265 "uuid": "db6f0851-2df0-50cc-8983-a088538430aa", 00:08:18.265 "is_configured": true, 00:08:18.265 "data_offset": 2048, 00:08:18.265 "data_size": 63488 00:08:18.265 }, 00:08:18.265 { 00:08:18.265 "name": "BaseBdev2", 00:08:18.265 "uuid": "804e8948-b4dd-53d5-a9f3-35bad4fffa66", 00:08:18.265 "is_configured": true, 00:08:18.265 "data_offset": 2048, 00:08:18.265 "data_size": 63488 00:08:18.265 } 00:08:18.265 ] 00:08:18.265 }' 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.265 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.832 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:18.832 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.833 [2024-11-15 11:20:01.662271] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.833 [2024-11-15 11:20:01.662342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.833 [2024-11-15 11:20:01.665503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.833 [2024-11-15 11:20:01.665593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.833 [2024-11-15 11:20:01.665637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.833 [2024-11-15 11:20:01.665654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:18.833 { 00:08:18.833 "results": [ 00:08:18.833 { 00:08:18.833 "job": "raid_bdev1", 00:08:18.833 "core_mask": "0x1", 00:08:18.833 "workload": "randrw", 00:08:18.833 "percentage": 50, 00:08:18.833 "status": "finished", 00:08:18.833 "queue_depth": 1, 00:08:18.833 "io_size": 131072, 00:08:18.833 "runtime": 1.46149, 00:08:18.833 "iops": 10491.347871008355, 00:08:18.833 "mibps": 1311.4184838760443, 00:08:18.833 "io_failed": 1, 00:08:18.833 "io_timeout": 0, 00:08:18.833 "avg_latency_us": 133.0025967250436, 00:08:18.833 "min_latency_us": 37.236363636363635, 00:08:18.833 "max_latency_us": 1690.530909090909 00:08:18.833 } 00:08:18.833 ], 00:08:18.833 "core_count": 1 00:08:18.833 } 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61232 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61232 ']' 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61232 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61232 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61232' 00:08:18.833 killing process with pid 61232 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61232 00:08:18.833 [2024-11-15 11:20:01.708815] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.833 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61232 00:08:19.093 [2024-11-15 11:20:01.825479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.033 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:20.033 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dTTb30OeLL 00:08:20.033 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:20.033 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:08:20.033 11:20:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:20.033 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.033 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:20.033 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:08:20.033 00:08:20.033 real 0m4.622s 00:08:20.033 user 0m5.772s 00:08:20.033 sys 0m0.620s 00:08:20.033 ************************************ 00:08:20.033 11:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:20.033 11:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.033 END TEST raid_read_error_test 00:08:20.033 ************************************ 00:08:20.292 11:20:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:20.292 11:20:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:20.292 11:20:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.292 11:20:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.292 ************************************ 00:08:20.292 START TEST raid_write_error_test 00:08:20.292 ************************************ 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.292 11:20:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:20.292 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ib0Dtj65Fe 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61372 00:08:20.292 11:20:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61372 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61372 ']' 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:20.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:20.292 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.292 [2024-11-15 11:20:03.118655] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:08:20.292 [2024-11-15 11:20:03.118876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61372 ] 00:08:20.550 [2024-11-15 11:20:03.310222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.550 [2024-11-15 11:20:03.447960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.809 [2024-11-15 11:20:03.667695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.809 [2024-11-15 11:20:03.667777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.376 BaseBdev1_malloc 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.376 true 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.376 [2024-11-15 11:20:04.131099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:21.376 [2024-11-15 11:20:04.131187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.376 [2024-11-15 11:20:04.131222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:21.376 [2024-11-15 11:20:04.131241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.376 [2024-11-15 11:20:04.134351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.376 [2024-11-15 11:20:04.134432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:21.376 BaseBdev1 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.376 BaseBdev2_malloc 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.376 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:21.377 11:20:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.377 true 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.377 [2024-11-15 11:20:04.199570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:21.377 [2024-11-15 11:20:04.199654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.377 [2024-11-15 11:20:04.199680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:21.377 [2024-11-15 11:20:04.199707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.377 [2024-11-15 11:20:04.202877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.377 [2024-11-15 11:20:04.202943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:21.377 BaseBdev2 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.377 [2024-11-15 11:20:04.207817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:21.377 [2024-11-15 11:20:04.210783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:21.377 [2024-11-15 11:20:04.211075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:21.377 [2024-11-15 11:20:04.211102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:21.377 [2024-11-15 11:20:04.211412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:21.377 [2024-11-15 11:20:04.211649] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:21.377 [2024-11-15 11:20:04.211671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:21.377 [2024-11-15 11:20:04.211909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.377 "name": "raid_bdev1", 00:08:21.377 "uuid": "a615391e-e388-434f-8ac5-de4e0aa4d103", 00:08:21.377 "strip_size_kb": 64, 00:08:21.377 "state": "online", 00:08:21.377 "raid_level": "raid0", 00:08:21.377 "superblock": true, 00:08:21.377 "num_base_bdevs": 2, 00:08:21.377 "num_base_bdevs_discovered": 2, 00:08:21.377 "num_base_bdevs_operational": 2, 00:08:21.377 "base_bdevs_list": [ 00:08:21.377 { 00:08:21.377 "name": "BaseBdev1", 00:08:21.377 "uuid": "451956ec-2f59-513b-b5e3-9cbab0e23313", 00:08:21.377 "is_configured": true, 00:08:21.377 "data_offset": 2048, 00:08:21.377 "data_size": 63488 00:08:21.377 }, 00:08:21.377 { 00:08:21.377 "name": "BaseBdev2", 00:08:21.377 "uuid": "c7120435-cf19-5fdd-aa01-ce9fc6d52dd6", 00:08:21.377 "is_configured": true, 00:08:21.377 "data_offset": 2048, 00:08:21.377 "data_size": 63488 00:08:21.377 } 00:08:21.377 ] 00:08:21.377 }' 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.377 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.943 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:21.943 11:20:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:21.943 [2024-11-15 11:20:04.853416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.879 11:20:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.879 "name": "raid_bdev1", 00:08:22.879 "uuid": "a615391e-e388-434f-8ac5-de4e0aa4d103", 00:08:22.879 "strip_size_kb": 64, 00:08:22.879 "state": "online", 00:08:22.879 "raid_level": "raid0", 00:08:22.879 "superblock": true, 00:08:22.879 "num_base_bdevs": 2, 00:08:22.879 "num_base_bdevs_discovered": 2, 00:08:22.879 "num_base_bdevs_operational": 2, 00:08:22.879 "base_bdevs_list": [ 00:08:22.879 { 00:08:22.879 "name": "BaseBdev1", 00:08:22.879 "uuid": "451956ec-2f59-513b-b5e3-9cbab0e23313", 00:08:22.879 "is_configured": true, 00:08:22.879 "data_offset": 2048, 00:08:22.879 "data_size": 63488 00:08:22.879 }, 00:08:22.879 { 00:08:22.879 "name": "BaseBdev2", 00:08:22.879 "uuid": "c7120435-cf19-5fdd-aa01-ce9fc6d52dd6", 00:08:22.879 "is_configured": true, 00:08:22.879 "data_offset": 2048, 00:08:22.879 "data_size": 63488 00:08:22.879 } 00:08:22.879 ] 00:08:22.879 }' 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.879 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.457 [2024-11-15 11:20:06.300155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:23.457 [2024-11-15 11:20:06.300258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.457 [2024-11-15 11:20:06.303620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.457 [2024-11-15 11:20:06.303692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.457 [2024-11-15 11:20:06.303736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.457 [2024-11-15 11:20:06.303753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:23.457 { 00:08:23.457 "results": [ 00:08:23.457 { 00:08:23.457 "job": "raid_bdev1", 00:08:23.457 "core_mask": "0x1", 00:08:23.457 "workload": "randrw", 00:08:23.457 "percentage": 50, 00:08:23.457 "status": "finished", 00:08:23.457 "queue_depth": 1, 00:08:23.457 "io_size": 131072, 00:08:23.457 "runtime": 1.44453, 00:08:23.457 "iops": 10601.37207257724, 00:08:23.457 "mibps": 1325.171509072155, 00:08:23.457 "io_failed": 1, 00:08:23.457 "io_timeout": 0, 00:08:23.457 "avg_latency_us": 132.17748113851542, 00:08:23.457 "min_latency_us": 37.00363636363636, 00:08:23.457 "max_latency_us": 1936.290909090909 00:08:23.457 } 00:08:23.457 ], 00:08:23.457 "core_count": 1 00:08:23.457 } 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61372 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61372 ']' 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61372 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61372 00:08:23.457 killing process with pid 61372 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61372' 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61372 00:08:23.457 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61372 00:08:23.457 [2024-11-15 11:20:06.343679] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:23.720 [2024-11-15 11:20:06.458961] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.654 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ib0Dtj65Fe 00:08:24.654 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:24.654 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:24.654 ************************************ 00:08:24.654 END TEST raid_write_error_test 00:08:24.654 ************************************ 00:08:24.654 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:08:24.654 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:24.654 
11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.654 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:24.654 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:08:24.654 00:08:24.654 real 0m4.541s 00:08:24.654 user 0m5.626s 00:08:24.654 sys 0m0.625s 00:08:24.654 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:24.654 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.654 11:20:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:24.654 11:20:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:24.654 11:20:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:24.654 11:20:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.654 11:20:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.654 ************************************ 00:08:24.654 START TEST raid_state_function_test 00:08:24.654 ************************************ 00:08:24.654 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:08:24.654 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:24.654 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:24.654 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:24.654 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:24.654 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:24.654 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:24.654 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:24.654 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:24.655 Process raid pid: 61516 00:08:24.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61516 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61516' 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61516 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61516 ']' 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.655 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.929 [2024-11-15 11:20:07.708860] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:08:24.929 [2024-11-15 11:20:07.709431] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.187 [2024-11-15 11:20:07.897336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.187 [2024-11-15 11:20:08.031888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.445 [2024-11-15 11:20:08.254134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.445 [2024-11-15 11:20:08.254443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.011 [2024-11-15 11:20:08.703439] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.011 [2024-11-15 11:20:08.703511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.011 [2024-11-15 11:20:08.703530] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.011 [2024-11-15 11:20:08.703547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.011 11:20:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.011 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.012 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.012 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.012 "name": "Existed_Raid", 00:08:26.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.012 "strip_size_kb": 64, 00:08:26.012 "state": "configuring", 00:08:26.012 
"raid_level": "concat", 00:08:26.012 "superblock": false, 00:08:26.012 "num_base_bdevs": 2, 00:08:26.012 "num_base_bdevs_discovered": 0, 00:08:26.012 "num_base_bdevs_operational": 2, 00:08:26.012 "base_bdevs_list": [ 00:08:26.012 { 00:08:26.012 "name": "BaseBdev1", 00:08:26.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.012 "is_configured": false, 00:08:26.012 "data_offset": 0, 00:08:26.012 "data_size": 0 00:08:26.012 }, 00:08:26.012 { 00:08:26.012 "name": "BaseBdev2", 00:08:26.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.012 "is_configured": false, 00:08:26.012 "data_offset": 0, 00:08:26.012 "data_size": 0 00:08:26.012 } 00:08:26.012 ] 00:08:26.012 }' 00:08:26.012 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.012 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.578 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.579 [2024-11-15 11:20:09.239631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:26.579 [2024-11-15 11:20:09.239680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:26.579 [2024-11-15 11:20:09.247584] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.579 [2024-11-15 11:20:09.247643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.579 [2024-11-15 11:20:09.247673] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.579 [2024-11-15 11:20:09.247692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.579 [2024-11-15 11:20:09.298798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.579 BaseBdev1 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.579 [ 00:08:26.579 { 00:08:26.579 "name": "BaseBdev1", 00:08:26.579 "aliases": [ 00:08:26.579 "901346ad-8c3a-40b4-8865-e99558d865e2" 00:08:26.579 ], 00:08:26.579 "product_name": "Malloc disk", 00:08:26.579 "block_size": 512, 00:08:26.579 "num_blocks": 65536, 00:08:26.579 "uuid": "901346ad-8c3a-40b4-8865-e99558d865e2", 00:08:26.579 "assigned_rate_limits": { 00:08:26.579 "rw_ios_per_sec": 0, 00:08:26.579 "rw_mbytes_per_sec": 0, 00:08:26.579 "r_mbytes_per_sec": 0, 00:08:26.579 "w_mbytes_per_sec": 0 00:08:26.579 }, 00:08:26.579 "claimed": true, 00:08:26.579 "claim_type": "exclusive_write", 00:08:26.579 "zoned": false, 00:08:26.579 "supported_io_types": { 00:08:26.579 "read": true, 00:08:26.579 "write": true, 00:08:26.579 "unmap": true, 00:08:26.579 "flush": true, 00:08:26.579 "reset": true, 00:08:26.579 "nvme_admin": false, 00:08:26.579 "nvme_io": false, 00:08:26.579 "nvme_io_md": false, 00:08:26.579 "write_zeroes": true, 00:08:26.579 "zcopy": true, 00:08:26.579 "get_zone_info": false, 00:08:26.579 "zone_management": false, 00:08:26.579 "zone_append": false, 00:08:26.579 "compare": false, 00:08:26.579 "compare_and_write": false, 00:08:26.579 "abort": true, 00:08:26.579 "seek_hole": false, 00:08:26.579 "seek_data": false, 00:08:26.579 "copy": true, 00:08:26.579 "nvme_iov_md": 
false 00:08:26.579 }, 00:08:26.579 "memory_domains": [ 00:08:26.579 { 00:08:26.579 "dma_device_id": "system", 00:08:26.579 "dma_device_type": 1 00:08:26.579 }, 00:08:26.579 { 00:08:26.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.579 "dma_device_type": 2 00:08:26.579 } 00:08:26.579 ], 00:08:26.579 "driver_specific": {} 00:08:26.579 } 00:08:26.579 ] 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.579 11:20:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.579 "name": "Existed_Raid", 00:08:26.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.579 "strip_size_kb": 64, 00:08:26.579 "state": "configuring", 00:08:26.579 "raid_level": "concat", 00:08:26.579 "superblock": false, 00:08:26.579 "num_base_bdevs": 2, 00:08:26.579 "num_base_bdevs_discovered": 1, 00:08:26.579 "num_base_bdevs_operational": 2, 00:08:26.579 "base_bdevs_list": [ 00:08:26.579 { 00:08:26.579 "name": "BaseBdev1", 00:08:26.579 "uuid": "901346ad-8c3a-40b4-8865-e99558d865e2", 00:08:26.579 "is_configured": true, 00:08:26.579 "data_offset": 0, 00:08:26.579 "data_size": 65536 00:08:26.579 }, 00:08:26.579 { 00:08:26.579 "name": "BaseBdev2", 00:08:26.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.579 "is_configured": false, 00:08:26.579 "data_offset": 0, 00:08:26.579 "data_size": 0 00:08:26.579 } 00:08:26.579 ] 00:08:26.579 }' 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.579 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.175 [2024-11-15 11:20:09.855103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.175 [2024-11-15 11:20:09.855365] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.175 [2024-11-15 11:20:09.867116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.175 [2024-11-15 11:20:09.869772] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.175 [2024-11-15 11:20:09.869821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.175 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.175 "name": "Existed_Raid", 00:08:27.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.175 "strip_size_kb": 64, 00:08:27.175 "state": "configuring", 00:08:27.175 "raid_level": "concat", 00:08:27.175 "superblock": false, 00:08:27.176 "num_base_bdevs": 2, 00:08:27.176 "num_base_bdevs_discovered": 1, 00:08:27.176 "num_base_bdevs_operational": 2, 00:08:27.176 "base_bdevs_list": [ 00:08:27.176 { 00:08:27.176 "name": "BaseBdev1", 00:08:27.176 "uuid": "901346ad-8c3a-40b4-8865-e99558d865e2", 00:08:27.176 "is_configured": true, 00:08:27.176 "data_offset": 0, 00:08:27.176 "data_size": 65536 00:08:27.176 }, 00:08:27.176 { 00:08:27.176 "name": "BaseBdev2", 00:08:27.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.176 "is_configured": false, 00:08:27.176 "data_offset": 0, 00:08:27.176 "data_size": 0 
00:08:27.176 } 00:08:27.176 ] 00:08:27.176 }' 00:08:27.176 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.176 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.742 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:27.742 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.742 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.742 [2024-11-15 11:20:10.432775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.742 [2024-11-15 11:20:10.432839] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:27.743 [2024-11-15 11:20:10.432852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:27.743 [2024-11-15 11:20:10.433292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:27.743 [2024-11-15 11:20:10.433517] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:27.743 [2024-11-15 11:20:10.433539] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:27.743 [2024-11-15 11:20:10.433895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.743 BaseBdev2 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:27.743 11:20:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.743 [ 00:08:27.743 { 00:08:27.743 "name": "BaseBdev2", 00:08:27.743 "aliases": [ 00:08:27.743 "aa0c9666-541e-4bd2-a41b-3d6d1259cc32" 00:08:27.743 ], 00:08:27.743 "product_name": "Malloc disk", 00:08:27.743 "block_size": 512, 00:08:27.743 "num_blocks": 65536, 00:08:27.743 "uuid": "aa0c9666-541e-4bd2-a41b-3d6d1259cc32", 00:08:27.743 "assigned_rate_limits": { 00:08:27.743 "rw_ios_per_sec": 0, 00:08:27.743 "rw_mbytes_per_sec": 0, 00:08:27.743 "r_mbytes_per_sec": 0, 00:08:27.743 "w_mbytes_per_sec": 0 00:08:27.743 }, 00:08:27.743 "claimed": true, 00:08:27.743 "claim_type": "exclusive_write", 00:08:27.743 "zoned": false, 00:08:27.743 "supported_io_types": { 00:08:27.743 "read": true, 00:08:27.743 "write": true, 00:08:27.743 "unmap": true, 00:08:27.743 "flush": true, 00:08:27.743 "reset": true, 00:08:27.743 "nvme_admin": false, 00:08:27.743 "nvme_io": false, 00:08:27.743 "nvme_io_md": 
false, 00:08:27.743 "write_zeroes": true, 00:08:27.743 "zcopy": true, 00:08:27.743 "get_zone_info": false, 00:08:27.743 "zone_management": false, 00:08:27.743 "zone_append": false, 00:08:27.743 "compare": false, 00:08:27.743 "compare_and_write": false, 00:08:27.743 "abort": true, 00:08:27.743 "seek_hole": false, 00:08:27.743 "seek_data": false, 00:08:27.743 "copy": true, 00:08:27.743 "nvme_iov_md": false 00:08:27.743 }, 00:08:27.743 "memory_domains": [ 00:08:27.743 { 00:08:27.743 "dma_device_id": "system", 00:08:27.743 "dma_device_type": 1 00:08:27.743 }, 00:08:27.743 { 00:08:27.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.743 "dma_device_type": 2 00:08:27.743 } 00:08:27.743 ], 00:08:27.743 "driver_specific": {} 00:08:27.743 } 00:08:27.743 ] 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.743 "name": "Existed_Raid", 00:08:27.743 "uuid": "43c14752-d966-4ca3-8cd2-62b4968c0f28", 00:08:27.743 "strip_size_kb": 64, 00:08:27.743 "state": "online", 00:08:27.743 "raid_level": "concat", 00:08:27.743 "superblock": false, 00:08:27.743 "num_base_bdevs": 2, 00:08:27.743 "num_base_bdevs_discovered": 2, 00:08:27.743 "num_base_bdevs_operational": 2, 00:08:27.743 "base_bdevs_list": [ 00:08:27.743 { 00:08:27.743 "name": "BaseBdev1", 00:08:27.743 "uuid": "901346ad-8c3a-40b4-8865-e99558d865e2", 00:08:27.743 "is_configured": true, 00:08:27.743 "data_offset": 0, 00:08:27.743 "data_size": 65536 00:08:27.743 }, 00:08:27.743 { 00:08:27.743 "name": "BaseBdev2", 00:08:27.743 "uuid": "aa0c9666-541e-4bd2-a41b-3d6d1259cc32", 00:08:27.743 "is_configured": true, 00:08:27.743 "data_offset": 0, 00:08:27.743 "data_size": 65536 00:08:27.743 } 00:08:27.743 ] 00:08:27.743 }' 00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:27.743 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.310 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:28.310 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:28.310 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:28.310 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:28.310 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:28.310 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:28.310 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:28.310 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.310 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:28.310 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.310 [2024-11-15 11:20:10.989411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.310 "name": "Existed_Raid", 00:08:28.310 "aliases": [ 00:08:28.310 "43c14752-d966-4ca3-8cd2-62b4968c0f28" 00:08:28.310 ], 00:08:28.310 "product_name": "Raid Volume", 00:08:28.310 "block_size": 512, 00:08:28.310 "num_blocks": 131072, 00:08:28.310 "uuid": "43c14752-d966-4ca3-8cd2-62b4968c0f28", 00:08:28.310 "assigned_rate_limits": { 00:08:28.310 "rw_ios_per_sec": 0, 00:08:28.310 "rw_mbytes_per_sec": 0, 00:08:28.310 "r_mbytes_per_sec": 
0, 00:08:28.310 "w_mbytes_per_sec": 0 00:08:28.310 }, 00:08:28.310 "claimed": false, 00:08:28.310 "zoned": false, 00:08:28.310 "supported_io_types": { 00:08:28.310 "read": true, 00:08:28.310 "write": true, 00:08:28.310 "unmap": true, 00:08:28.310 "flush": true, 00:08:28.310 "reset": true, 00:08:28.310 "nvme_admin": false, 00:08:28.310 "nvme_io": false, 00:08:28.310 "nvme_io_md": false, 00:08:28.310 "write_zeroes": true, 00:08:28.310 "zcopy": false, 00:08:28.310 "get_zone_info": false, 00:08:28.310 "zone_management": false, 00:08:28.310 "zone_append": false, 00:08:28.310 "compare": false, 00:08:28.310 "compare_and_write": false, 00:08:28.310 "abort": false, 00:08:28.310 "seek_hole": false, 00:08:28.310 "seek_data": false, 00:08:28.310 "copy": false, 00:08:28.310 "nvme_iov_md": false 00:08:28.310 }, 00:08:28.310 "memory_domains": [ 00:08:28.310 { 00:08:28.310 "dma_device_id": "system", 00:08:28.310 "dma_device_type": 1 00:08:28.310 }, 00:08:28.310 { 00:08:28.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.310 "dma_device_type": 2 00:08:28.310 }, 00:08:28.310 { 00:08:28.310 "dma_device_id": "system", 00:08:28.310 "dma_device_type": 1 00:08:28.310 }, 00:08:28.310 { 00:08:28.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.310 "dma_device_type": 2 00:08:28.310 } 00:08:28.310 ], 00:08:28.310 "driver_specific": { 00:08:28.310 "raid": { 00:08:28.310 "uuid": "43c14752-d966-4ca3-8cd2-62b4968c0f28", 00:08:28.310 "strip_size_kb": 64, 00:08:28.310 "state": "online", 00:08:28.310 "raid_level": "concat", 00:08:28.310 "superblock": false, 00:08:28.310 "num_base_bdevs": 2, 00:08:28.310 "num_base_bdevs_discovered": 2, 00:08:28.310 "num_base_bdevs_operational": 2, 00:08:28.310 "base_bdevs_list": [ 00:08:28.310 { 00:08:28.310 "name": "BaseBdev1", 00:08:28.310 "uuid": "901346ad-8c3a-40b4-8865-e99558d865e2", 00:08:28.310 "is_configured": true, 00:08:28.310 "data_offset": 0, 00:08:28.310 "data_size": 65536 00:08:28.310 }, 00:08:28.310 { 00:08:28.310 "name": "BaseBdev2", 
00:08:28.310 "uuid": "aa0c9666-541e-4bd2-a41b-3d6d1259cc32", 00:08:28.310 "is_configured": true, 00:08:28.310 "data_offset": 0, 00:08:28.310 "data_size": 65536 00:08:28.310 } 00:08:28.310 ] 00:08:28.310 } 00:08:28.310 } 00:08:28.310 }' 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:28.310 BaseBdev2' 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.310 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.310 [2024-11-15 11:20:11.253151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:28.310 [2024-11-15 11:20:11.253197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.310 [2024-11-15 11:20:11.253295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.568 "name": "Existed_Raid", 00:08:28.568 "uuid": "43c14752-d966-4ca3-8cd2-62b4968c0f28", 00:08:28.568 "strip_size_kb": 64, 00:08:28.568 
"state": "offline", 00:08:28.568 "raid_level": "concat", 00:08:28.568 "superblock": false, 00:08:28.568 "num_base_bdevs": 2, 00:08:28.568 "num_base_bdevs_discovered": 1, 00:08:28.568 "num_base_bdevs_operational": 1, 00:08:28.568 "base_bdevs_list": [ 00:08:28.568 { 00:08:28.568 "name": null, 00:08:28.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.568 "is_configured": false, 00:08:28.568 "data_offset": 0, 00:08:28.568 "data_size": 65536 00:08:28.568 }, 00:08:28.568 { 00:08:28.568 "name": "BaseBdev2", 00:08:28.568 "uuid": "aa0c9666-541e-4bd2-a41b-3d6d1259cc32", 00:08:28.568 "is_configured": true, 00:08:28.568 "data_offset": 0, 00:08:28.568 "data_size": 65536 00:08:28.568 } 00:08:28.568 ] 00:08:28.568 }' 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.568 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.135 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:29.135 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:29.136 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.136 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:29.136 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.136 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.136 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.136 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:29.136 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:29.136 11:20:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:29.136 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.136 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.136 [2024-11-15 11:20:11.954245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:29.136 [2024-11-15 11:20:11.954322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:29.136 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.136 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:29.136 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:29.136 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.136 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:29.136 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.136 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.136 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61516 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61516 ']' 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 61516 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61516 00:08:29.394 killing process with pid 61516 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61516' 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61516 00:08:29.394 [2024-11-15 11:20:12.134212] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.394 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61516 00:08:29.394 [2024-11-15 11:20:12.150130] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.770 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:30.771 00:08:30.771 real 0m5.704s 00:08:30.771 user 0m8.494s 00:08:30.771 sys 0m0.872s 00:08:30.771 ************************************ 00:08:30.771 END TEST raid_state_function_test 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.771 ************************************ 00:08:30.771 11:20:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:30.771 11:20:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:08:30.771 11:20:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.771 11:20:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.771 ************************************ 00:08:30.771 START TEST raid_state_function_test_sb 00:08:30.771 ************************************ 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61774 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61774' 00:08:30.771 Process raid pid: 61774 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61774 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61774 ']' 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:30.771 11:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.771 [2024-11-15 11:20:13.465291] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:08:30.771 [2024-11-15 11:20:13.465722] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.771 [2024-11-15 11:20:13.646908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.029 [2024-11-15 11:20:13.789250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.288 [2024-11-15 11:20:14.000385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.288 [2024-11-15 11:20:14.000437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.547 [2024-11-15 11:20:14.397545] bdev.c:8672:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:31.547 [2024-11-15 11:20:14.397624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.547 [2024-11-15 11:20:14.397640] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.547 [2024-11-15 11:20:14.397654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.547 "name": "Existed_Raid", 00:08:31.547 "uuid": "0e76a793-4fde-4b05-b467-c3bb423f408d", 00:08:31.547 "strip_size_kb": 64, 00:08:31.547 "state": "configuring", 00:08:31.547 "raid_level": "concat", 00:08:31.547 "superblock": true, 00:08:31.547 "num_base_bdevs": 2, 00:08:31.547 "num_base_bdevs_discovered": 0, 00:08:31.547 "num_base_bdevs_operational": 2, 00:08:31.547 "base_bdevs_list": [ 00:08:31.547 { 00:08:31.547 "name": "BaseBdev1", 00:08:31.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.547 "is_configured": false, 00:08:31.547 "data_offset": 0, 00:08:31.547 "data_size": 0 00:08:31.547 }, 00:08:31.547 { 00:08:31.547 "name": "BaseBdev2", 00:08:31.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.547 "is_configured": false, 00:08:31.547 "data_offset": 0, 00:08:31.547 "data_size": 0 00:08:31.547 } 00:08:31.547 ] 00:08:31.547 }' 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.547 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.114 [2024-11-15 11:20:14.917654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:32.114 [2024-11-15 11:20:14.917698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.114 [2024-11-15 11:20:14.925608] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.114 [2024-11-15 11:20:14.925673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.114 [2024-11-15 11:20:14.925687] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.114 [2024-11-15 11:20:14.925705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.114 [2024-11-15 11:20:14.970360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.114 BaseBdev1 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.114 11:20:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:32.115 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:32.115 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:32.115 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:32.115 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:32.115 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:32.115 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:32.115 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.115 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.115 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.115 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.115 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.115 11:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.115 [ 00:08:32.115 { 00:08:32.115 "name": "BaseBdev1", 00:08:32.115 "aliases": [ 00:08:32.115 "a45eed3f-5284-42fc-8b25-b82fa9322e83" 00:08:32.115 ], 00:08:32.115 "product_name": "Malloc disk", 00:08:32.115 "block_size": 512, 00:08:32.115 "num_blocks": 65536, 00:08:32.115 "uuid": "a45eed3f-5284-42fc-8b25-b82fa9322e83", 00:08:32.115 "assigned_rate_limits": { 00:08:32.115 "rw_ios_per_sec": 0, 00:08:32.115 "rw_mbytes_per_sec": 0, 00:08:32.115 "r_mbytes_per_sec": 0, 00:08:32.115 "w_mbytes_per_sec": 0 00:08:32.115 }, 00:08:32.115 "claimed": true, 
00:08:32.115 "claim_type": "exclusive_write", 00:08:32.115 "zoned": false, 00:08:32.115 "supported_io_types": { 00:08:32.115 "read": true, 00:08:32.115 "write": true, 00:08:32.115 "unmap": true, 00:08:32.115 "flush": true, 00:08:32.115 "reset": true, 00:08:32.115 "nvme_admin": false, 00:08:32.115 "nvme_io": false, 00:08:32.115 "nvme_io_md": false, 00:08:32.115 "write_zeroes": true, 00:08:32.115 "zcopy": true, 00:08:32.115 "get_zone_info": false, 00:08:32.115 "zone_management": false, 00:08:32.115 "zone_append": false, 00:08:32.115 "compare": false, 00:08:32.115 "compare_and_write": false, 00:08:32.115 "abort": true, 00:08:32.115 "seek_hole": false, 00:08:32.115 "seek_data": false, 00:08:32.115 "copy": true, 00:08:32.115 "nvme_iov_md": false 00:08:32.115 }, 00:08:32.115 "memory_domains": [ 00:08:32.115 { 00:08:32.115 "dma_device_id": "system", 00:08:32.115 "dma_device_type": 1 00:08:32.115 }, 00:08:32.115 { 00:08:32.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.115 "dma_device_type": 2 00:08:32.115 } 00:08:32.115 ], 00:08:32.115 "driver_specific": {} 00:08:32.115 } 00:08:32.115 ] 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.115 11:20:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.115 "name": "Existed_Raid", 00:08:32.115 "uuid": "0c9bd0fa-9ca1-4cd0-882e-acce75794eab", 00:08:32.115 "strip_size_kb": 64, 00:08:32.115 "state": "configuring", 00:08:32.115 "raid_level": "concat", 00:08:32.115 "superblock": true, 00:08:32.115 "num_base_bdevs": 2, 00:08:32.115 "num_base_bdevs_discovered": 1, 00:08:32.115 "num_base_bdevs_operational": 2, 00:08:32.115 "base_bdevs_list": [ 00:08:32.115 { 00:08:32.115 "name": "BaseBdev1", 00:08:32.115 "uuid": "a45eed3f-5284-42fc-8b25-b82fa9322e83", 00:08:32.115 "is_configured": true, 00:08:32.115 "data_offset": 2048, 00:08:32.115 "data_size": 63488 00:08:32.115 }, 00:08:32.115 { 00:08:32.115 "name": "BaseBdev2", 00:08:32.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.115 
"is_configured": false, 00:08:32.115 "data_offset": 0, 00:08:32.115 "data_size": 0 00:08:32.115 } 00:08:32.115 ] 00:08:32.115 }' 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.115 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.683 [2024-11-15 11:20:15.530635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.683 [2024-11-15 11:20:15.530901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.683 [2024-11-15 11:20:15.538658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.683 [2024-11-15 11:20:15.541146] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.683 [2024-11-15 11:20:15.541396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.683 11:20:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.683 11:20:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.683 "name": "Existed_Raid", 00:08:32.683 "uuid": "92cec91b-e6d8-4a27-b3de-2761f723fb6d", 00:08:32.683 "strip_size_kb": 64, 00:08:32.683 "state": "configuring", 00:08:32.683 "raid_level": "concat", 00:08:32.683 "superblock": true, 00:08:32.683 "num_base_bdevs": 2, 00:08:32.683 "num_base_bdevs_discovered": 1, 00:08:32.683 "num_base_bdevs_operational": 2, 00:08:32.683 "base_bdevs_list": [ 00:08:32.683 { 00:08:32.683 "name": "BaseBdev1", 00:08:32.683 "uuid": "a45eed3f-5284-42fc-8b25-b82fa9322e83", 00:08:32.683 "is_configured": true, 00:08:32.683 "data_offset": 2048, 00:08:32.683 "data_size": 63488 00:08:32.683 }, 00:08:32.683 { 00:08:32.683 "name": "BaseBdev2", 00:08:32.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.683 "is_configured": false, 00:08:32.683 "data_offset": 0, 00:08:32.683 "data_size": 0 00:08:32.683 } 00:08:32.683 ] 00:08:32.683 }' 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.683 11:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.252 [2024-11-15 11:20:16.107457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.252 [2024-11-15 11:20:16.107771] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:33.252 [2024-11-15 11:20:16.107790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:33.252 BaseBdev2 00:08:33.252 [2024-11-15 11:20:16.108113] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:33.252 [2024-11-15 11:20:16.108337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:33.252 [2024-11-15 11:20:16.108359] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:33.252 [2024-11-15 11:20:16.108528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:33.252 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.252 
11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.252 [ 00:08:33.252 { 00:08:33.252 "name": "BaseBdev2", 00:08:33.252 "aliases": [ 00:08:33.252 "7e49db22-7375-404e-9d7f-eb44ccdee623" 00:08:33.252 ], 00:08:33.252 "product_name": "Malloc disk", 00:08:33.252 "block_size": 512, 00:08:33.252 "num_blocks": 65536, 00:08:33.252 "uuid": "7e49db22-7375-404e-9d7f-eb44ccdee623", 00:08:33.252 "assigned_rate_limits": { 00:08:33.252 "rw_ios_per_sec": 0, 00:08:33.252 "rw_mbytes_per_sec": 0, 00:08:33.252 "r_mbytes_per_sec": 0, 00:08:33.252 "w_mbytes_per_sec": 0 00:08:33.252 }, 00:08:33.252 "claimed": true, 00:08:33.252 "claim_type": "exclusive_write", 00:08:33.252 "zoned": false, 00:08:33.252 "supported_io_types": { 00:08:33.252 "read": true, 00:08:33.252 "write": true, 00:08:33.252 "unmap": true, 00:08:33.252 "flush": true, 00:08:33.252 "reset": true, 00:08:33.252 "nvme_admin": false, 00:08:33.252 "nvme_io": false, 00:08:33.252 "nvme_io_md": false, 00:08:33.252 "write_zeroes": true, 00:08:33.252 "zcopy": true, 00:08:33.252 "get_zone_info": false, 00:08:33.253 "zone_management": false, 00:08:33.253 "zone_append": false, 00:08:33.253 "compare": false, 00:08:33.253 "compare_and_write": false, 00:08:33.253 "abort": true, 00:08:33.253 "seek_hole": false, 00:08:33.253 "seek_data": false, 00:08:33.253 "copy": true, 00:08:33.253 "nvme_iov_md": false 00:08:33.253 }, 00:08:33.253 "memory_domains": [ 00:08:33.253 { 00:08:33.253 "dma_device_id": "system", 00:08:33.253 "dma_device_type": 1 00:08:33.253 }, 00:08:33.253 { 00:08:33.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.253 "dma_device_type": 2 00:08:33.253 } 00:08:33.253 ], 00:08:33.253 "driver_specific": {} 00:08:33.253 } 00:08:33.253 ] 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:33.253 11:20:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.253 11:20:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.253 "name": "Existed_Raid", 00:08:33.253 "uuid": "92cec91b-e6d8-4a27-b3de-2761f723fb6d", 00:08:33.253 "strip_size_kb": 64, 00:08:33.253 "state": "online", 00:08:33.253 "raid_level": "concat", 00:08:33.253 "superblock": true, 00:08:33.253 "num_base_bdevs": 2, 00:08:33.253 "num_base_bdevs_discovered": 2, 00:08:33.253 "num_base_bdevs_operational": 2, 00:08:33.253 "base_bdevs_list": [ 00:08:33.253 { 00:08:33.253 "name": "BaseBdev1", 00:08:33.253 "uuid": "a45eed3f-5284-42fc-8b25-b82fa9322e83", 00:08:33.253 "is_configured": true, 00:08:33.253 "data_offset": 2048, 00:08:33.253 "data_size": 63488 00:08:33.253 }, 00:08:33.253 { 00:08:33.253 "name": "BaseBdev2", 00:08:33.253 "uuid": "7e49db22-7375-404e-9d7f-eb44ccdee623", 00:08:33.253 "is_configured": true, 00:08:33.253 "data_offset": 2048, 00:08:33.253 "data_size": 63488 00:08:33.253 } 00:08:33.253 ] 00:08:33.253 }' 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.253 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.820 [2024-11-15 11:20:16.668022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.820 "name": "Existed_Raid", 00:08:33.820 "aliases": [ 00:08:33.820 "92cec91b-e6d8-4a27-b3de-2761f723fb6d" 00:08:33.820 ], 00:08:33.820 "product_name": "Raid Volume", 00:08:33.820 "block_size": 512, 00:08:33.820 "num_blocks": 126976, 00:08:33.820 "uuid": "92cec91b-e6d8-4a27-b3de-2761f723fb6d", 00:08:33.820 "assigned_rate_limits": { 00:08:33.820 "rw_ios_per_sec": 0, 00:08:33.820 "rw_mbytes_per_sec": 0, 00:08:33.820 "r_mbytes_per_sec": 0, 00:08:33.820 "w_mbytes_per_sec": 0 00:08:33.820 }, 00:08:33.820 "claimed": false, 00:08:33.820 "zoned": false, 00:08:33.820 "supported_io_types": { 00:08:33.820 "read": true, 00:08:33.820 "write": true, 00:08:33.820 "unmap": true, 00:08:33.820 "flush": true, 00:08:33.820 "reset": true, 00:08:33.820 "nvme_admin": false, 00:08:33.820 "nvme_io": false, 00:08:33.820 "nvme_io_md": false, 00:08:33.820 "write_zeroes": true, 00:08:33.820 "zcopy": false, 00:08:33.820 "get_zone_info": false, 00:08:33.820 "zone_management": false, 00:08:33.820 "zone_append": false, 00:08:33.820 "compare": false, 00:08:33.820 "compare_and_write": false, 00:08:33.820 "abort": false, 00:08:33.820 "seek_hole": false, 00:08:33.820 "seek_data": false, 00:08:33.820 "copy": false, 00:08:33.820 "nvme_iov_md": false 00:08:33.820 }, 00:08:33.820 "memory_domains": [ 00:08:33.820 { 00:08:33.820 
"dma_device_id": "system", 00:08:33.820 "dma_device_type": 1 00:08:33.820 }, 00:08:33.820 { 00:08:33.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.820 "dma_device_type": 2 00:08:33.820 }, 00:08:33.820 { 00:08:33.820 "dma_device_id": "system", 00:08:33.820 "dma_device_type": 1 00:08:33.820 }, 00:08:33.820 { 00:08:33.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.820 "dma_device_type": 2 00:08:33.820 } 00:08:33.820 ], 00:08:33.820 "driver_specific": { 00:08:33.820 "raid": { 00:08:33.820 "uuid": "92cec91b-e6d8-4a27-b3de-2761f723fb6d", 00:08:33.820 "strip_size_kb": 64, 00:08:33.820 "state": "online", 00:08:33.820 "raid_level": "concat", 00:08:33.820 "superblock": true, 00:08:33.820 "num_base_bdevs": 2, 00:08:33.820 "num_base_bdevs_discovered": 2, 00:08:33.820 "num_base_bdevs_operational": 2, 00:08:33.820 "base_bdevs_list": [ 00:08:33.820 { 00:08:33.820 "name": "BaseBdev1", 00:08:33.820 "uuid": "a45eed3f-5284-42fc-8b25-b82fa9322e83", 00:08:33.820 "is_configured": true, 00:08:33.820 "data_offset": 2048, 00:08:33.820 "data_size": 63488 00:08:33.820 }, 00:08:33.820 { 00:08:33.820 "name": "BaseBdev2", 00:08:33.820 "uuid": "7e49db22-7375-404e-9d7f-eb44ccdee623", 00:08:33.820 "is_configured": true, 00:08:33.820 "data_offset": 2048, 00:08:33.820 "data_size": 63488 00:08:33.820 } 00:08:33.820 ] 00:08:33.820 } 00:08:33.820 } 00:08:33.820 }' 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.820 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:33.820 BaseBdev2' 00:08:33.821 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.079 11:20:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.079 11:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.079 [2024-11-15 11:20:16.915802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.079 [2024-11-15 11:20:16.915859] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.079 [2024-11-15 11:20:16.915946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.079 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.080 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.080 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.080 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.080 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.080 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.080 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.338 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.338 "name": "Existed_Raid", 00:08:34.338 "uuid": "92cec91b-e6d8-4a27-b3de-2761f723fb6d", 00:08:34.338 "strip_size_kb": 64, 00:08:34.338 "state": "offline", 00:08:34.338 "raid_level": "concat", 00:08:34.338 "superblock": true, 00:08:34.338 "num_base_bdevs": 2, 00:08:34.338 "num_base_bdevs_discovered": 1, 00:08:34.338 "num_base_bdevs_operational": 1, 00:08:34.338 "base_bdevs_list": [ 00:08:34.338 { 00:08:34.339 "name": null, 00:08:34.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.339 "is_configured": false, 00:08:34.339 "data_offset": 0, 00:08:34.339 "data_size": 63488 00:08:34.339 }, 00:08:34.339 { 00:08:34.339 "name": "BaseBdev2", 00:08:34.339 "uuid": "7e49db22-7375-404e-9d7f-eb44ccdee623", 00:08:34.339 "is_configured": true, 00:08:34.339 "data_offset": 2048, 00:08:34.339 "data_size": 63488 00:08:34.339 } 00:08:34.339 ] 
00:08:34.339 }' 00:08:34.339 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.339 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.598 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:34.598 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.598 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:34.598 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.598 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.598 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.598 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.859 [2024-11-15 11:20:17.567919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:34.859 [2024-11-15 11:20:17.567989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.859 11:20:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61774 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61774 ']' 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61774 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61774 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:34.859 killing process with pid 61774 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61774' 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61774 00:08:34.859 [2024-11-15 11:20:17.740624] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.859 11:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61774 00:08:34.859 [2024-11-15 11:20:17.755751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.239 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:36.239 ************************************ 00:08:36.239 END TEST raid_state_function_test_sb 00:08:36.239 ************************************ 00:08:36.239 00:08:36.239 real 0m5.482s 00:08:36.239 user 0m8.204s 00:08:36.239 sys 0m0.823s 00:08:36.239 11:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.239 11:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.239 11:20:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:36.239 11:20:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:36.239 11:20:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.239 11:20:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.239 ************************************ 00:08:36.239 START TEST raid_superblock_test 00:08:36.239 ************************************ 00:08:36.239 11:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:08:36.239 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:36.239 11:20:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:36.239 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:36.239 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:36.239 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:36.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62032 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62032 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62032 ']' 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.240 11:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.240 [2024-11-15 11:20:18.982688] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:08:36.240 [2024-11-15 11:20:18.982851] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62032 ] 00:08:36.240 [2024-11-15 11:20:19.163087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.498 [2024-11-15 11:20:19.342578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.757 [2024-11-15 11:20:19.561738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.757 [2024-11-15 11:20:19.561792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:37.326 
11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.326 malloc1 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.326 [2024-11-15 11:20:20.087017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:37.326 [2024-11-15 11:20:20.087285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.326 [2024-11-15 11:20:20.087363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:37.326 [2024-11-15 11:20:20.087668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.326 [2024-11-15 11:20:20.090697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.326 [2024-11-15 11:20:20.090894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:37.326 pt1 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.326 malloc2 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.326 [2024-11-15 11:20:20.147409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:37.326 [2024-11-15 11:20:20.147674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.326 [2024-11-15 11:20:20.147756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:37.326 [2024-11-15 11:20:20.148037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.326 [2024-11-15 11:20:20.151037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.326 pt2 00:08:37.326 [2024-11-15 11:20:20.151238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.326 [2024-11-15 11:20:20.155494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:37.326 [2024-11-15 11:20:20.157980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:37.326 [2024-11-15 11:20:20.158223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:37.326 [2024-11-15 11:20:20.158258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:37.326 [2024-11-15 11:20:20.158628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:37.326 [2024-11-15 11:20:20.158824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:37.326 [2024-11-15 11:20:20.158843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:37.326 [2024-11-15 11:20:20.159032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.326 "name": "raid_bdev1", 00:08:37.326 "uuid": "c4ec6204-ad26-4b16-b198-9dc2f5015361", 00:08:37.326 "strip_size_kb": 64, 00:08:37.326 "state": "online", 00:08:37.326 "raid_level": "concat", 00:08:37.326 "superblock": true, 00:08:37.326 "num_base_bdevs": 2, 00:08:37.326 "num_base_bdevs_discovered": 2, 00:08:37.326 "num_base_bdevs_operational": 2, 00:08:37.326 "base_bdevs_list": [ 00:08:37.326 { 00:08:37.326 "name": "pt1", 
00:08:37.326 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:37.326 "is_configured": true, 00:08:37.326 "data_offset": 2048, 00:08:37.326 "data_size": 63488 00:08:37.326 }, 00:08:37.326 { 00:08:37.326 "name": "pt2", 00:08:37.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.326 "is_configured": true, 00:08:37.326 "data_offset": 2048, 00:08:37.326 "data_size": 63488 00:08:37.326 } 00:08:37.326 ] 00:08:37.326 }' 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.326 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.893 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.894 [2024-11-15 11:20:20.667981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:37.894 "name": "raid_bdev1", 00:08:37.894 "aliases": [ 00:08:37.894 "c4ec6204-ad26-4b16-b198-9dc2f5015361" 00:08:37.894 ], 00:08:37.894 "product_name": "Raid Volume", 00:08:37.894 "block_size": 512, 00:08:37.894 "num_blocks": 126976, 00:08:37.894 "uuid": "c4ec6204-ad26-4b16-b198-9dc2f5015361", 00:08:37.894 "assigned_rate_limits": { 00:08:37.894 "rw_ios_per_sec": 0, 00:08:37.894 "rw_mbytes_per_sec": 0, 00:08:37.894 "r_mbytes_per_sec": 0, 00:08:37.894 "w_mbytes_per_sec": 0 00:08:37.894 }, 00:08:37.894 "claimed": false, 00:08:37.894 "zoned": false, 00:08:37.894 "supported_io_types": { 00:08:37.894 "read": true, 00:08:37.894 "write": true, 00:08:37.894 "unmap": true, 00:08:37.894 "flush": true, 00:08:37.894 "reset": true, 00:08:37.894 "nvme_admin": false, 00:08:37.894 "nvme_io": false, 00:08:37.894 "nvme_io_md": false, 00:08:37.894 "write_zeroes": true, 00:08:37.894 "zcopy": false, 00:08:37.894 "get_zone_info": false, 00:08:37.894 "zone_management": false, 00:08:37.894 "zone_append": false, 00:08:37.894 "compare": false, 00:08:37.894 "compare_and_write": false, 00:08:37.894 "abort": false, 00:08:37.894 "seek_hole": false, 00:08:37.894 "seek_data": false, 00:08:37.894 "copy": false, 00:08:37.894 "nvme_iov_md": false 00:08:37.894 }, 00:08:37.894 "memory_domains": [ 00:08:37.894 { 00:08:37.894 "dma_device_id": "system", 00:08:37.894 "dma_device_type": 1 00:08:37.894 }, 00:08:37.894 { 00:08:37.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.894 "dma_device_type": 2 00:08:37.894 }, 00:08:37.894 { 00:08:37.894 "dma_device_id": "system", 00:08:37.894 "dma_device_type": 1 00:08:37.894 }, 00:08:37.894 { 00:08:37.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.894 "dma_device_type": 2 00:08:37.894 } 00:08:37.894 ], 00:08:37.894 "driver_specific": { 00:08:37.894 "raid": { 00:08:37.894 "uuid": "c4ec6204-ad26-4b16-b198-9dc2f5015361", 00:08:37.894 "strip_size_kb": 64, 00:08:37.894 "state": "online", 00:08:37.894 
"raid_level": "concat", 00:08:37.894 "superblock": true, 00:08:37.894 "num_base_bdevs": 2, 00:08:37.894 "num_base_bdevs_discovered": 2, 00:08:37.894 "num_base_bdevs_operational": 2, 00:08:37.894 "base_bdevs_list": [ 00:08:37.894 { 00:08:37.894 "name": "pt1", 00:08:37.894 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:37.894 "is_configured": true, 00:08:37.894 "data_offset": 2048, 00:08:37.894 "data_size": 63488 00:08:37.894 }, 00:08:37.894 { 00:08:37.894 "name": "pt2", 00:08:37.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.894 "is_configured": true, 00:08:37.894 "data_offset": 2048, 00:08:37.894 "data_size": 63488 00:08:37.894 } 00:08:37.894 ] 00:08:37.894 } 00:08:37.894 } 00:08:37.894 }' 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:37.894 pt2' 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.894 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.152 11:20:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.152 [2024-11-15 11:20:20.944007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c4ec6204-ad26-4b16-b198-9dc2f5015361 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
c4ec6204-ad26-4b16-b198-9dc2f5015361 ']' 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.152 [2024-11-15 11:20:20.991622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.152 [2024-11-15 11:20:20.991646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.152 [2024-11-15 11:20:20.991734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.152 [2024-11-15 11:20:20.991798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.152 [2024-11-15 11:20:20.991816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:38.152 11:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:38.152 11:20:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.152 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.410 [2024-11-15 11:20:21.131729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:38.410 [2024-11-15 11:20:21.134332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:38.410 [2024-11-15 11:20:21.134463] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:38.410 [2024-11-15 11:20:21.134602] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:38.410 [2024-11-15 11:20:21.134874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.410 [2024-11-15 11:20:21.134899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:38.410 request: 00:08:38.410 { 00:08:38.410 "name": "raid_bdev1", 00:08:38.410 "raid_level": "concat", 00:08:38.410 "base_bdevs": [ 00:08:38.410 "malloc1", 00:08:38.410 "malloc2" 00:08:38.410 ], 00:08:38.410 "strip_size_kb": 64, 
00:08:38.410 "superblock": false, 00:08:38.410 "method": "bdev_raid_create", 00:08:38.410 "req_id": 1 00:08:38.410 } 00:08:38.410 Got JSON-RPC error response 00:08:38.410 response: 00:08:38.410 { 00:08:38.410 "code": -17, 00:08:38.410 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:38.410 } 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.410 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.410 [2024-11-15 11:20:21.195790] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:08:38.410 [2024-11-15 11:20:21.195860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.410 [2024-11-15 11:20:21.195883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:38.410 [2024-11-15 11:20:21.195898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.410 [2024-11-15 11:20:21.199058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.410 [2024-11-15 11:20:21.199259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:38.411 [2024-11-15 11:20:21.199366] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:38.411 [2024-11-15 11:20:21.199441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:38.411 pt1 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.411 "name": "raid_bdev1", 00:08:38.411 "uuid": "c4ec6204-ad26-4b16-b198-9dc2f5015361", 00:08:38.411 "strip_size_kb": 64, 00:08:38.411 "state": "configuring", 00:08:38.411 "raid_level": "concat", 00:08:38.411 "superblock": true, 00:08:38.411 "num_base_bdevs": 2, 00:08:38.411 "num_base_bdevs_discovered": 1, 00:08:38.411 "num_base_bdevs_operational": 2, 00:08:38.411 "base_bdevs_list": [ 00:08:38.411 { 00:08:38.411 "name": "pt1", 00:08:38.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:38.411 "is_configured": true, 00:08:38.411 "data_offset": 2048, 00:08:38.411 "data_size": 63488 00:08:38.411 }, 00:08:38.411 { 00:08:38.411 "name": null, 00:08:38.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:38.411 "is_configured": false, 00:08:38.411 "data_offset": 2048, 00:08:38.411 "data_size": 63488 00:08:38.411 } 00:08:38.411 ] 00:08:38.411 }' 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.411 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.978 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:38.978 11:20:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:38.978 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:38.978 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:38.978 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.978 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.978 [2024-11-15 11:20:21.720002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:38.978 [2024-11-15 11:20:21.720114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.978 [2024-11-15 11:20:21.720146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:38.978 [2024-11-15 11:20:21.720163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.978 [2024-11-15 11:20:21.720829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.978 [2024-11-15 11:20:21.720863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:38.978 [2024-11-15 11:20:21.721030] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:38.978 [2024-11-15 11:20:21.721078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:38.978 [2024-11-15 11:20:21.721276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:38.978 [2024-11-15 11:20:21.721298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:38.978 [2024-11-15 11:20:21.721640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:38.978 [2024-11-15 11:20:21.721848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:08:38.978 [2024-11-15 11:20:21.721861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:38.978 [2024-11-15 11:20:21.722067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.978 pt2 00:08:38.978 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.978 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:38.978 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:38.978 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:38.978 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.978 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.979 "name": "raid_bdev1", 00:08:38.979 "uuid": "c4ec6204-ad26-4b16-b198-9dc2f5015361", 00:08:38.979 "strip_size_kb": 64, 00:08:38.979 "state": "online", 00:08:38.979 "raid_level": "concat", 00:08:38.979 "superblock": true, 00:08:38.979 "num_base_bdevs": 2, 00:08:38.979 "num_base_bdevs_discovered": 2, 00:08:38.979 "num_base_bdevs_operational": 2, 00:08:38.979 "base_bdevs_list": [ 00:08:38.979 { 00:08:38.979 "name": "pt1", 00:08:38.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:38.979 "is_configured": true, 00:08:38.979 "data_offset": 2048, 00:08:38.979 "data_size": 63488 00:08:38.979 }, 00:08:38.979 { 00:08:38.979 "name": "pt2", 00:08:38.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:38.979 "is_configured": true, 00:08:38.979 "data_offset": 2048, 00:08:38.979 "data_size": 63488 00:08:38.979 } 00:08:38.979 ] 00:08:38.979 }' 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.979 11:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.545 11:20:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.545 [2024-11-15 11:20:22.272506] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.545 "name": "raid_bdev1", 00:08:39.545 "aliases": [ 00:08:39.545 "c4ec6204-ad26-4b16-b198-9dc2f5015361" 00:08:39.545 ], 00:08:39.545 "product_name": "Raid Volume", 00:08:39.545 "block_size": 512, 00:08:39.545 "num_blocks": 126976, 00:08:39.545 "uuid": "c4ec6204-ad26-4b16-b198-9dc2f5015361", 00:08:39.545 "assigned_rate_limits": { 00:08:39.545 "rw_ios_per_sec": 0, 00:08:39.545 "rw_mbytes_per_sec": 0, 00:08:39.545 "r_mbytes_per_sec": 0, 00:08:39.545 "w_mbytes_per_sec": 0 00:08:39.545 }, 00:08:39.545 "claimed": false, 00:08:39.545 "zoned": false, 00:08:39.545 "supported_io_types": { 00:08:39.545 "read": true, 00:08:39.545 "write": true, 00:08:39.545 "unmap": true, 00:08:39.545 "flush": true, 00:08:39.545 "reset": true, 00:08:39.545 "nvme_admin": false, 00:08:39.545 "nvme_io": false, 00:08:39.545 "nvme_io_md": false, 00:08:39.545 "write_zeroes": true, 00:08:39.545 "zcopy": false, 00:08:39.545 "get_zone_info": false, 00:08:39.545 "zone_management": false, 00:08:39.545 "zone_append": false, 00:08:39.545 "compare": false, 00:08:39.545 "compare_and_write": false, 00:08:39.545 "abort": false, 00:08:39.545 "seek_hole": false, 00:08:39.545 
"seek_data": false, 00:08:39.545 "copy": false, 00:08:39.545 "nvme_iov_md": false 00:08:39.545 }, 00:08:39.545 "memory_domains": [ 00:08:39.545 { 00:08:39.545 "dma_device_id": "system", 00:08:39.545 "dma_device_type": 1 00:08:39.545 }, 00:08:39.545 { 00:08:39.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.545 "dma_device_type": 2 00:08:39.545 }, 00:08:39.545 { 00:08:39.545 "dma_device_id": "system", 00:08:39.545 "dma_device_type": 1 00:08:39.545 }, 00:08:39.545 { 00:08:39.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.545 "dma_device_type": 2 00:08:39.545 } 00:08:39.545 ], 00:08:39.545 "driver_specific": { 00:08:39.545 "raid": { 00:08:39.545 "uuid": "c4ec6204-ad26-4b16-b198-9dc2f5015361", 00:08:39.545 "strip_size_kb": 64, 00:08:39.545 "state": "online", 00:08:39.545 "raid_level": "concat", 00:08:39.545 "superblock": true, 00:08:39.545 "num_base_bdevs": 2, 00:08:39.545 "num_base_bdevs_discovered": 2, 00:08:39.545 "num_base_bdevs_operational": 2, 00:08:39.545 "base_bdevs_list": [ 00:08:39.545 { 00:08:39.545 "name": "pt1", 00:08:39.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.545 "is_configured": true, 00:08:39.545 "data_offset": 2048, 00:08:39.545 "data_size": 63488 00:08:39.545 }, 00:08:39.545 { 00:08:39.545 "name": "pt2", 00:08:39.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.545 "is_configured": true, 00:08:39.545 "data_offset": 2048, 00:08:39.545 "data_size": 63488 00:08:39.545 } 00:08:39.545 ] 00:08:39.545 } 00:08:39.545 } 00:08:39.545 }' 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:39.545 pt2' 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.545 11:20:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.545 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.803 [2024-11-15 11:20:22.528462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c4ec6204-ad26-4b16-b198-9dc2f5015361 '!=' c4ec6204-ad26-4b16-b198-9dc2f5015361 ']' 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62032 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62032 ']' 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 62032 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62032 00:08:39.803 killing process with pid 62032 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 62032' 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62032 00:08:39.803 [2024-11-15 11:20:22.608013] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.803 11:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62032 00:08:39.803 [2024-11-15 11:20:22.608163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.803 [2024-11-15 11:20:22.608280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.803 [2024-11-15 11:20:22.608316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:40.060 [2024-11-15 11:20:22.803587] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.992 11:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:40.992 00:08:40.992 real 0m4.991s 00:08:40.992 user 0m7.295s 00:08:40.992 sys 0m0.783s 00:08:40.992 11:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:40.992 11:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.992 ************************************ 00:08:40.992 END TEST raid_superblock_test 00:08:40.992 ************************************ 00:08:40.992 11:20:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:40.992 11:20:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:40.992 11:20:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.992 11:20:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.992 ************************************ 00:08:40.992 START TEST raid_read_error_test 00:08:40.992 ************************************ 00:08:40.992 11:20:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:08:40.992 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:40.992 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:40.992 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:41.250 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:41.250 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.250 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:41.250 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.250 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:41.251 11:20:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lKI1xxF5jb 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62248 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62248 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62248 ']' 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:41.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:41.251 11:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.251 [2024-11-15 11:20:24.068537] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:08:41.251 [2024-11-15 11:20:24.069614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62248 ] 00:08:41.509 [2024-11-15 11:20:24.262546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.509 [2024-11-15 11:20:24.400240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.804 [2024-11-15 11:20:24.606523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.804 [2024-11-15 11:20:24.606601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.375 BaseBdev1_malloc 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.375 true 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.375 [2024-11-15 11:20:25.133614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:42.375 [2024-11-15 11:20:25.133687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.375 [2024-11-15 11:20:25.133725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:42.375 [2024-11-15 11:20:25.133750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.375 [2024-11-15 11:20:25.137407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.375 [2024-11-15 11:20:25.137456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:42.375 BaseBdev1 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.375 BaseBdev2_malloc 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.375 true 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.375 [2024-11-15 11:20:25.192454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:42.375 [2024-11-15 11:20:25.192697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.375 [2024-11-15 11:20:25.192743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:42.375 [2024-11-15 11:20:25.192771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.375 [2024-11-15 11:20:25.196027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.375 [2024-11-15 11:20:25.196079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:42.375 BaseBdev2 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.375 [2024-11-15 11:20:25.200526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:42.375 [2024-11-15 11:20:25.203088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.375 [2024-11-15 11:20:25.203523] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:42.375 [2024-11-15 11:20:25.203566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:42.375 [2024-11-15 11:20:25.203874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:42.375 [2024-11-15 11:20:25.204079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:42.375 [2024-11-15 11:20:25.204099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:42.375 [2024-11-15 11:20:25.204466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.375 "name": "raid_bdev1", 00:08:42.375 "uuid": "59df2d92-d29c-49f1-a613-014985a57721", 00:08:42.375 "strip_size_kb": 64, 00:08:42.375 "state": "online", 00:08:42.375 "raid_level": "concat", 00:08:42.375 "superblock": true, 00:08:42.375 "num_base_bdevs": 2, 00:08:42.375 "num_base_bdevs_discovered": 2, 00:08:42.375 "num_base_bdevs_operational": 2, 00:08:42.375 "base_bdevs_list": [ 00:08:42.375 { 00:08:42.375 "name": "BaseBdev1", 00:08:42.375 "uuid": "aece2250-41f9-5526-b4d5-85aacd5559df", 00:08:42.375 "is_configured": true, 00:08:42.375 "data_offset": 2048, 00:08:42.375 "data_size": 63488 00:08:42.375 }, 00:08:42.375 { 00:08:42.375 "name": "BaseBdev2", 00:08:42.375 "uuid": "022ac20e-554f-5d9a-8b57-20582fb9c1df", 00:08:42.375 "is_configured": true, 00:08:42.375 "data_offset": 2048, 00:08:42.375 "data_size": 63488 00:08:42.375 } 00:08:42.375 ] 00:08:42.375 }' 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.375 11:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.942 11:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:42.942 11:20:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:42.942 [2024-11-15 11:20:25.834543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.879 "name": "raid_bdev1", 00:08:43.879 "uuid": "59df2d92-d29c-49f1-a613-014985a57721", 00:08:43.879 "strip_size_kb": 64, 00:08:43.879 "state": "online", 00:08:43.879 "raid_level": "concat", 00:08:43.879 "superblock": true, 00:08:43.879 "num_base_bdevs": 2, 00:08:43.879 "num_base_bdevs_discovered": 2, 00:08:43.879 "num_base_bdevs_operational": 2, 00:08:43.879 "base_bdevs_list": [ 00:08:43.879 { 00:08:43.879 "name": "BaseBdev1", 00:08:43.879 "uuid": "aece2250-41f9-5526-b4d5-85aacd5559df", 00:08:43.879 "is_configured": true, 00:08:43.879 "data_offset": 2048, 00:08:43.879 "data_size": 63488 00:08:43.879 }, 00:08:43.879 { 00:08:43.879 "name": "BaseBdev2", 00:08:43.879 "uuid": "022ac20e-554f-5d9a-8b57-20582fb9c1df", 00:08:43.879 "is_configured": true, 00:08:43.879 "data_offset": 2048, 00:08:43.879 "data_size": 63488 00:08:43.879 } 00:08:43.879 ] 00:08:43.879 }' 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.879 11:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.446 11:20:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.446 [2024-11-15 11:20:27.286827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.446 [2024-11-15 11:20:27.286870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.446 [2024-11-15 11:20:27.290346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.446 [2024-11-15 11:20:27.290590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.446 [2024-11-15 11:20:27.290675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.446 [2024-11-15 11:20:27.290929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:44.446 { 00:08:44.446 "results": [ 00:08:44.446 { 00:08:44.446 "job": "raid_bdev1", 00:08:44.446 "core_mask": "0x1", 00:08:44.446 "workload": "randrw", 00:08:44.446 "percentage": 50, 00:08:44.446 "status": "finished", 00:08:44.446 "queue_depth": 1, 00:08:44.446 "io_size": 131072, 00:08:44.446 "runtime": 1.449325, 00:08:44.446 "iops": 10648.405292118745, 00:08:44.446 "mibps": 1331.0506615148431, 00:08:44.446 "io_failed": 1, 00:08:44.446 "io_timeout": 0, 00:08:44.446 "avg_latency_us": 131.68653480509383, 00:08:44.446 "min_latency_us": 35.374545454545455, 00:08:44.446 "max_latency_us": 1802.24 00:08:44.446 } 00:08:44.446 ], 00:08:44.446 "core_count": 1 00:08:44.446 } 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62248 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62248 ']' 00:08:44.446 11:20:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62248 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62248 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:44.446 killing process with pid 62248 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62248' 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62248 00:08:44.446 11:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62248 00:08:44.446 [2024-11-15 11:20:27.332074] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.705 [2024-11-15 11:20:27.472384] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.082 11:20:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lKI1xxF5jb 00:08:46.082 11:20:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:46.082 11:20:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:46.082 ************************************ 00:08:46.082 END TEST raid_read_error_test 00:08:46.082 ************************************ 00:08:46.082 11:20:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:08:46.082 11:20:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:46.082 11:20:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:08:46.082 11:20:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.082 11:20:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:08:46.082 00:08:46.082 real 0m4.669s 00:08:46.082 user 0m5.798s 00:08:46.082 sys 0m0.631s 00:08:46.082 11:20:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.082 11:20:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.082 11:20:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:46.082 11:20:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:46.082 11:20:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.082 11:20:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.082 ************************************ 00:08:46.082 START TEST raid_write_error_test 00:08:46.082 ************************************ 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.082 11:20:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mGB8NLWid2 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62389 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62389 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62389 ']' 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:46.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:46.082 11:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.082 [2024-11-15 11:20:28.789684] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:08:46.082 [2024-11-15 11:20:28.789868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62389 ] 00:08:46.082 [2024-11-15 11:20:28.979372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.341 [2024-11-15 11:20:29.130824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.599 [2024-11-15 11:20:29.342404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.599 [2024-11-15 11:20:29.342444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.857 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:46.857 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:46.857 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
for bdev in "${base_bdevs[@]}" 00:08:46.857 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:46.857 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.857 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.116 BaseBdev1_malloc 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.116 true 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.116 [2024-11-15 11:20:29.827116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:47.116 [2024-11-15 11:20:29.827233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.116 [2024-11-15 11:20:29.827286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:47.116 [2024-11-15 11:20:29.827325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.116 [2024-11-15 11:20:29.830362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.116 [2024-11-15 11:20:29.830412] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:47.116 BaseBdev1 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.116 BaseBdev2_malloc 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.116 true 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.116 [2024-11-15 11:20:29.883035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:47.116 [2024-11-15 11:20:29.883096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.116 [2024-11-15 11:20:29.883127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:47.116 
[2024-11-15 11:20:29.883151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.116 [2024-11-15 11:20:29.886214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.116 [2024-11-15 11:20:29.886261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:47.116 BaseBdev2 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.116 [2024-11-15 11:20:29.891139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.116 [2024-11-15 11:20:29.893785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.116 [2024-11-15 11:20:29.894055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:47.116 [2024-11-15 11:20:29.894078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:47.116 [2024-11-15 11:20:29.894424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:47.116 [2024-11-15 11:20:29.894693] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:47.116 [2024-11-15 11:20:29.894712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:47.116 [2024-11-15 11:20:29.894883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.116 
11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.116 "name": "raid_bdev1", 00:08:47.116 "uuid": "a82e9161-7397-42c1-8bfa-0592b5c286b6", 00:08:47.116 "strip_size_kb": 64, 00:08:47.116 "state": "online", 00:08:47.116 "raid_level": "concat", 00:08:47.116 "superblock": true, 
00:08:47.116 "num_base_bdevs": 2, 00:08:47.116 "num_base_bdevs_discovered": 2, 00:08:47.116 "num_base_bdevs_operational": 2, 00:08:47.116 "base_bdevs_list": [ 00:08:47.116 { 00:08:47.116 "name": "BaseBdev1", 00:08:47.116 "uuid": "8107dd1f-b533-5bb8-9f4c-c1d3b416abd9", 00:08:47.116 "is_configured": true, 00:08:47.116 "data_offset": 2048, 00:08:47.116 "data_size": 63488 00:08:47.116 }, 00:08:47.116 { 00:08:47.116 "name": "BaseBdev2", 00:08:47.116 "uuid": "650c3d38-9263-55ea-83d6-beac5407c6da", 00:08:47.116 "is_configured": true, 00:08:47.116 "data_offset": 2048, 00:08:47.116 "data_size": 63488 00:08:47.116 } 00:08:47.116 ] 00:08:47.116 }' 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.116 11:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.683 11:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:47.683 11:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:47.683 [2024-11-15 11:20:30.544906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.620 "name": "raid_bdev1", 00:08:48.620 "uuid": "a82e9161-7397-42c1-8bfa-0592b5c286b6", 00:08:48.620 "strip_size_kb": 64, 00:08:48.620 "state": "online", 00:08:48.620 "raid_level": "concat", 
00:08:48.620 "superblock": true, 00:08:48.620 "num_base_bdevs": 2, 00:08:48.620 "num_base_bdevs_discovered": 2, 00:08:48.620 "num_base_bdevs_operational": 2, 00:08:48.620 "base_bdevs_list": [ 00:08:48.620 { 00:08:48.620 "name": "BaseBdev1", 00:08:48.620 "uuid": "8107dd1f-b533-5bb8-9f4c-c1d3b416abd9", 00:08:48.620 "is_configured": true, 00:08:48.620 "data_offset": 2048, 00:08:48.620 "data_size": 63488 00:08:48.620 }, 00:08:48.620 { 00:08:48.620 "name": "BaseBdev2", 00:08:48.620 "uuid": "650c3d38-9263-55ea-83d6-beac5407c6da", 00:08:48.620 "is_configured": true, 00:08:48.620 "data_offset": 2048, 00:08:48.620 "data_size": 63488 00:08:48.620 } 00:08:48.620 ] 00:08:48.620 }' 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.620 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.209 [2024-11-15 11:20:31.929008] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:49.209 [2024-11-15 11:20:31.929048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.209 [2024-11-15 11:20:31.932277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.209 [2024-11-15 11:20:31.932334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.209 [2024-11-15 11:20:31.932380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.209 [2024-11-15 11:20:31.932402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:49.209 { 
00:08:49.209 "results": [ 00:08:49.209 { 00:08:49.209 "job": "raid_bdev1", 00:08:49.209 "core_mask": "0x1", 00:08:49.209 "workload": "randrw", 00:08:49.209 "percentage": 50, 00:08:49.209 "status": "finished", 00:08:49.209 "queue_depth": 1, 00:08:49.209 "io_size": 131072, 00:08:49.209 "runtime": 1.381344, 00:08:49.209 "iops": 10320.383626381263, 00:08:49.209 "mibps": 1290.0479532976578, 00:08:49.209 "io_failed": 1, 00:08:49.209 "io_timeout": 0, 00:08:49.209 "avg_latency_us": 135.92874084181932, 00:08:49.209 "min_latency_us": 36.77090909090909, 00:08:49.209 "max_latency_us": 1720.32 00:08:49.209 } 00:08:49.209 ], 00:08:49.209 "core_count": 1 00:08:49.209 } 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62389 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62389 ']' 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62389 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62389 00:08:49.209 killing process with pid 62389 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62389' 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62389 00:08:49.209 11:20:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@976 -- # wait 62389 00:08:49.209 [2024-11-15 11:20:31.968811] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.209 [2024-11-15 11:20:32.081406] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.586 11:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mGB8NLWid2 00:08:50.586 11:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:50.586 11:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:50.586 11:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:50.586 11:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:50.586 11:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.586 11:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:50.586 11:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:50.586 00:08:50.586 real 0m4.474s 00:08:50.586 user 0m5.573s 00:08:50.586 sys 0m0.616s 00:08:50.586 11:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:50.586 ************************************ 00:08:50.586 END TEST raid_write_error_test 00:08:50.586 ************************************ 00:08:50.586 11:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.586 11:20:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:50.586 11:20:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:50.586 11:20:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:50.586 11:20:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:50.586 11:20:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:08:50.586 ************************************ 00:08:50.586 START TEST raid_state_function_test 00:08:50.586 ************************************ 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local 
strip_size 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:50.586 Process raid pid: 62533 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62533 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62533' 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62533 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62533 ']' 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:50.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:50.586 11:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.586 [2024-11-15 11:20:33.314448] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:08:50.586 [2024-11-15 11:20:33.314687] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.586 [2024-11-15 11:20:33.502915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.845 [2024-11-15 11:20:33.643324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.103 [2024-11-15 11:20:33.867260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.103 [2024-11-15 11:20:33.867327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.362 [2024-11-15 11:20:34.290883] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:51.362 [2024-11-15 11:20:34.290957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:51.362 [2024-11-15 11:20:34.290981] bdev.c:8672:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:08:51.362 [2024-11-15 11:20:34.291003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.362 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.622 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:51.622 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.622 "name": "Existed_Raid", 00:08:51.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.622 "strip_size_kb": 0, 00:08:51.622 "state": "configuring", 00:08:51.622 "raid_level": "raid1", 00:08:51.622 "superblock": false, 00:08:51.622 "num_base_bdevs": 2, 00:08:51.622 "num_base_bdevs_discovered": 0, 00:08:51.622 "num_base_bdevs_operational": 2, 00:08:51.622 "base_bdevs_list": [ 00:08:51.622 { 00:08:51.622 "name": "BaseBdev1", 00:08:51.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.622 "is_configured": false, 00:08:51.622 "data_offset": 0, 00:08:51.622 "data_size": 0 00:08:51.622 }, 00:08:51.622 { 00:08:51.622 "name": "BaseBdev2", 00:08:51.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.622 "is_configured": false, 00:08:51.622 "data_offset": 0, 00:08:51.622 "data_size": 0 00:08:51.622 } 00:08:51.622 ] 00:08:51.622 }' 00:08:51.622 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.622 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.881 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:51.881 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.881 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.881 [2024-11-15 11:20:34.778961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:51.881 [2024-11-15 11:20:34.779157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:51.881 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.881 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:51.881 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.881 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.881 [2024-11-15 11:20:34.786917] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:51.881 [2024-11-15 11:20:34.787122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:51.881 [2024-11-15 11:20:34.787346] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:51.881 [2024-11-15 11:20:34.787572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:51.881 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.881 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:51.881 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.881 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.140 [2024-11-15 11:20:34.831726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.140 BaseBdev1 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:52.140 
11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.140 [ 00:08:52.140 { 00:08:52.140 "name": "BaseBdev1", 00:08:52.140 "aliases": [ 00:08:52.140 "0c94691e-2e2d-460d-8937-d4fff8497324" 00:08:52.140 ], 00:08:52.140 "product_name": "Malloc disk", 00:08:52.140 "block_size": 512, 00:08:52.140 "num_blocks": 65536, 00:08:52.140 "uuid": "0c94691e-2e2d-460d-8937-d4fff8497324", 00:08:52.140 "assigned_rate_limits": { 00:08:52.140 "rw_ios_per_sec": 0, 00:08:52.140 "rw_mbytes_per_sec": 0, 00:08:52.140 "r_mbytes_per_sec": 0, 00:08:52.140 "w_mbytes_per_sec": 0 00:08:52.140 }, 00:08:52.140 "claimed": true, 00:08:52.140 "claim_type": "exclusive_write", 00:08:52.140 "zoned": false, 00:08:52.140 "supported_io_types": { 00:08:52.140 "read": true, 00:08:52.140 "write": true, 00:08:52.140 "unmap": true, 00:08:52.140 "flush": true, 00:08:52.140 "reset": true, 00:08:52.140 "nvme_admin": false, 00:08:52.140 "nvme_io": false, 00:08:52.140 "nvme_io_md": false, 00:08:52.140 "write_zeroes": true, 00:08:52.140 "zcopy": true, 00:08:52.140 "get_zone_info": 
false, 00:08:52.140 "zone_management": false, 00:08:52.140 "zone_append": false, 00:08:52.140 "compare": false, 00:08:52.140 "compare_and_write": false, 00:08:52.140 "abort": true, 00:08:52.140 "seek_hole": false, 00:08:52.140 "seek_data": false, 00:08:52.140 "copy": true, 00:08:52.140 "nvme_iov_md": false 00:08:52.140 }, 00:08:52.140 "memory_domains": [ 00:08:52.140 { 00:08:52.140 "dma_device_id": "system", 00:08:52.140 "dma_device_type": 1 00:08:52.140 }, 00:08:52.140 { 00:08:52.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.140 "dma_device_type": 2 00:08:52.140 } 00:08:52.140 ], 00:08:52.140 "driver_specific": {} 00:08:52.140 } 00:08:52.140 ] 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.140 "name": "Existed_Raid", 00:08:52.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.140 "strip_size_kb": 0, 00:08:52.140 "state": "configuring", 00:08:52.140 "raid_level": "raid1", 00:08:52.140 "superblock": false, 00:08:52.140 "num_base_bdevs": 2, 00:08:52.140 "num_base_bdevs_discovered": 1, 00:08:52.140 "num_base_bdevs_operational": 2, 00:08:52.140 "base_bdevs_list": [ 00:08:52.140 { 00:08:52.140 "name": "BaseBdev1", 00:08:52.140 "uuid": "0c94691e-2e2d-460d-8937-d4fff8497324", 00:08:52.140 "is_configured": true, 00:08:52.140 "data_offset": 0, 00:08:52.140 "data_size": 65536 00:08:52.140 }, 00:08:52.140 { 00:08:52.140 "name": "BaseBdev2", 00:08:52.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.140 "is_configured": false, 00:08:52.140 "data_offset": 0, 00:08:52.140 "data_size": 0 00:08:52.140 } 00:08:52.140 ] 00:08:52.140 }' 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.140 11:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.708 [2024-11-15 11:20:35.367980] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:52.708 [2024-11-15 11:20:35.368065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.708 [2024-11-15 11:20:35.375960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.708 [2024-11-15 11:20:35.378961] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.708 [2024-11-15 11:20:35.379162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.708 "name": "Existed_Raid", 00:08:52.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.708 "strip_size_kb": 0, 00:08:52.708 "state": "configuring", 00:08:52.708 "raid_level": "raid1", 00:08:52.708 "superblock": false, 00:08:52.708 "num_base_bdevs": 2, 00:08:52.708 "num_base_bdevs_discovered": 1, 00:08:52.708 "num_base_bdevs_operational": 2, 00:08:52.708 "base_bdevs_list": [ 00:08:52.708 { 00:08:52.708 "name": "BaseBdev1", 00:08:52.708 "uuid": "0c94691e-2e2d-460d-8937-d4fff8497324", 00:08:52.708 
"is_configured": true, 00:08:52.708 "data_offset": 0, 00:08:52.708 "data_size": 65536 00:08:52.708 }, 00:08:52.708 { 00:08:52.708 "name": "BaseBdev2", 00:08:52.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.708 "is_configured": false, 00:08:52.708 "data_offset": 0, 00:08:52.708 "data_size": 0 00:08:52.708 } 00:08:52.708 ] 00:08:52.708 }' 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.708 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.967 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:52.967 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.967 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.226 [2024-11-15 11:20:35.942042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.226 [2024-11-15 11:20:35.942107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:53.226 [2024-11-15 11:20:35.942119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:53.226 [2024-11-15 11:20:35.942518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:53.226 [2024-11-15 11:20:35.942752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:53.226 [2024-11-15 11:20:35.942777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:53.226 [2024-11-15 11:20:35.943117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.226 BaseBdev2 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.226 [ 00:08:53.226 { 00:08:53.226 "name": "BaseBdev2", 00:08:53.226 "aliases": [ 00:08:53.226 "5f7a0b28-0b78-4f49-83ed-f19c601f5d68" 00:08:53.226 ], 00:08:53.226 "product_name": "Malloc disk", 00:08:53.226 "block_size": 512, 00:08:53.226 "num_blocks": 65536, 00:08:53.226 "uuid": "5f7a0b28-0b78-4f49-83ed-f19c601f5d68", 00:08:53.226 "assigned_rate_limits": { 00:08:53.226 "rw_ios_per_sec": 0, 00:08:53.226 "rw_mbytes_per_sec": 0, 00:08:53.226 "r_mbytes_per_sec": 0, 00:08:53.226 "w_mbytes_per_sec": 0 00:08:53.226 }, 00:08:53.226 "claimed": true, 00:08:53.226 "claim_type": 
"exclusive_write", 00:08:53.226 "zoned": false, 00:08:53.226 "supported_io_types": { 00:08:53.226 "read": true, 00:08:53.226 "write": true, 00:08:53.226 "unmap": true, 00:08:53.226 "flush": true, 00:08:53.226 "reset": true, 00:08:53.226 "nvme_admin": false, 00:08:53.226 "nvme_io": false, 00:08:53.226 "nvme_io_md": false, 00:08:53.226 "write_zeroes": true, 00:08:53.226 "zcopy": true, 00:08:53.226 "get_zone_info": false, 00:08:53.226 "zone_management": false, 00:08:53.226 "zone_append": false, 00:08:53.226 "compare": false, 00:08:53.226 "compare_and_write": false, 00:08:53.226 "abort": true, 00:08:53.226 "seek_hole": false, 00:08:53.226 "seek_data": false, 00:08:53.226 "copy": true, 00:08:53.226 "nvme_iov_md": false 00:08:53.226 }, 00:08:53.226 "memory_domains": [ 00:08:53.226 { 00:08:53.226 "dma_device_id": "system", 00:08:53.226 "dma_device_type": 1 00:08:53.226 }, 00:08:53.226 { 00:08:53.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.226 "dma_device_type": 2 00:08:53.226 } 00:08:53.226 ], 00:08:53.226 "driver_specific": {} 00:08:53.226 } 00:08:53.226 ] 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:53.226 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.227 
11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.227 11:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.227 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.227 "name": "Existed_Raid", 00:08:53.227 "uuid": "ced3135c-1f52-4e08-9ca2-f8df3c2f2079", 00:08:53.227 "strip_size_kb": 0, 00:08:53.227 "state": "online", 00:08:53.227 "raid_level": "raid1", 00:08:53.227 "superblock": false, 00:08:53.227 "num_base_bdevs": 2, 00:08:53.227 "num_base_bdevs_discovered": 2, 00:08:53.227 "num_base_bdevs_operational": 2, 00:08:53.227 "base_bdevs_list": [ 00:08:53.227 { 00:08:53.227 "name": "BaseBdev1", 00:08:53.227 "uuid": "0c94691e-2e2d-460d-8937-d4fff8497324", 00:08:53.227 "is_configured": true, 00:08:53.227 "data_offset": 0, 00:08:53.227 "data_size": 65536 00:08:53.227 }, 00:08:53.227 { 00:08:53.227 "name": "BaseBdev2", 
00:08:53.227 "uuid": "5f7a0b28-0b78-4f49-83ed-f19c601f5d68", 00:08:53.227 "is_configured": true, 00:08:53.227 "data_offset": 0, 00:08:53.227 "data_size": 65536 00:08:53.227 } 00:08:53.227 ] 00:08:53.227 }' 00:08:53.227 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.227 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.794 [2024-11-15 11:20:36.526649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.794 "name": "Existed_Raid", 00:08:53.794 "aliases": [ 00:08:53.794 "ced3135c-1f52-4e08-9ca2-f8df3c2f2079" 00:08:53.794 ], 
00:08:53.794 "product_name": "Raid Volume", 00:08:53.794 "block_size": 512, 00:08:53.794 "num_blocks": 65536, 00:08:53.794 "uuid": "ced3135c-1f52-4e08-9ca2-f8df3c2f2079", 00:08:53.794 "assigned_rate_limits": { 00:08:53.794 "rw_ios_per_sec": 0, 00:08:53.794 "rw_mbytes_per_sec": 0, 00:08:53.794 "r_mbytes_per_sec": 0, 00:08:53.794 "w_mbytes_per_sec": 0 00:08:53.794 }, 00:08:53.794 "claimed": false, 00:08:53.794 "zoned": false, 00:08:53.794 "supported_io_types": { 00:08:53.794 "read": true, 00:08:53.794 "write": true, 00:08:53.794 "unmap": false, 00:08:53.794 "flush": false, 00:08:53.794 "reset": true, 00:08:53.794 "nvme_admin": false, 00:08:53.794 "nvme_io": false, 00:08:53.794 "nvme_io_md": false, 00:08:53.794 "write_zeroes": true, 00:08:53.794 "zcopy": false, 00:08:53.794 "get_zone_info": false, 00:08:53.794 "zone_management": false, 00:08:53.794 "zone_append": false, 00:08:53.794 "compare": false, 00:08:53.794 "compare_and_write": false, 00:08:53.794 "abort": false, 00:08:53.794 "seek_hole": false, 00:08:53.794 "seek_data": false, 00:08:53.794 "copy": false, 00:08:53.794 "nvme_iov_md": false 00:08:53.794 }, 00:08:53.794 "memory_domains": [ 00:08:53.794 { 00:08:53.794 "dma_device_id": "system", 00:08:53.794 "dma_device_type": 1 00:08:53.794 }, 00:08:53.794 { 00:08:53.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.794 "dma_device_type": 2 00:08:53.794 }, 00:08:53.794 { 00:08:53.794 "dma_device_id": "system", 00:08:53.794 "dma_device_type": 1 00:08:53.794 }, 00:08:53.794 { 00:08:53.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.794 "dma_device_type": 2 00:08:53.794 } 00:08:53.794 ], 00:08:53.794 "driver_specific": { 00:08:53.794 "raid": { 00:08:53.794 "uuid": "ced3135c-1f52-4e08-9ca2-f8df3c2f2079", 00:08:53.794 "strip_size_kb": 0, 00:08:53.794 "state": "online", 00:08:53.794 "raid_level": "raid1", 00:08:53.794 "superblock": false, 00:08:53.794 "num_base_bdevs": 2, 00:08:53.794 "num_base_bdevs_discovered": 2, 00:08:53.794 "num_base_bdevs_operational": 
2, 00:08:53.794 "base_bdevs_list": [ 00:08:53.794 { 00:08:53.794 "name": "BaseBdev1", 00:08:53.794 "uuid": "0c94691e-2e2d-460d-8937-d4fff8497324", 00:08:53.794 "is_configured": true, 00:08:53.794 "data_offset": 0, 00:08:53.794 "data_size": 65536 00:08:53.794 }, 00:08:53.794 { 00:08:53.794 "name": "BaseBdev2", 00:08:53.794 "uuid": "5f7a0b28-0b78-4f49-83ed-f19c601f5d68", 00:08:53.794 "is_configured": true, 00:08:53.794 "data_offset": 0, 00:08:53.794 "data_size": 65536 00:08:53.794 } 00:08:53.794 ] 00:08:53.794 } 00:08:53.794 } 00:08:53.794 }' 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:53.794 BaseBdev2' 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.794 11:20:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.794 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.052 [2024-11-15 11:20:36.794484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.052 "name": "Existed_Raid", 00:08:54.052 "uuid": 
"ced3135c-1f52-4e08-9ca2-f8df3c2f2079", 00:08:54.052 "strip_size_kb": 0, 00:08:54.052 "state": "online", 00:08:54.052 "raid_level": "raid1", 00:08:54.052 "superblock": false, 00:08:54.052 "num_base_bdevs": 2, 00:08:54.052 "num_base_bdevs_discovered": 1, 00:08:54.052 "num_base_bdevs_operational": 1, 00:08:54.052 "base_bdevs_list": [ 00:08:54.052 { 00:08:54.052 "name": null, 00:08:54.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.052 "is_configured": false, 00:08:54.052 "data_offset": 0, 00:08:54.052 "data_size": 65536 00:08:54.052 }, 00:08:54.052 { 00:08:54.052 "name": "BaseBdev2", 00:08:54.052 "uuid": "5f7a0b28-0b78-4f49-83ed-f19c601f5d68", 00:08:54.052 "is_configured": true, 00:08:54.052 "data_offset": 0, 00:08:54.052 "data_size": 65536 00:08:54.052 } 00:08:54.052 ] 00:08:54.052 }' 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.052 11:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.618 [2024-11-15 11:20:37.451028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:54.618 [2024-11-15 11:20:37.451352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.618 [2024-11-15 11:20:37.530639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.618 [2024-11-15 11:20:37.531008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.618 [2024-11-15 11:20:37.531043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.618 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.877 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:54.877 
11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:54.877 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:54.877 11:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62533 00:08:54.877 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62533 ']' 00:08:54.878 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62533 00:08:54.878 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:54.878 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:54.878 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62533 00:08:54.878 killing process with pid 62533 00:08:54.878 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:54.878 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:54.878 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62533' 00:08:54.878 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62533 00:08:54.878 [2024-11-15 11:20:37.615513] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.878 11:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62533 00:08:54.878 [2024-11-15 11:20:37.629582] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:55.870 00:08:55.870 real 0m5.469s 00:08:55.870 user 0m8.174s 00:08:55.870 sys 0m0.869s 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:08:55.870 ************************************ 00:08:55.870 END TEST raid_state_function_test 00:08:55.870 ************************************ 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.870 11:20:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:55.870 11:20:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:55.870 11:20:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:55.870 11:20:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:55.870 ************************************ 00:08:55.870 START TEST raid_state_function_test_sb 00:08:55.870 ************************************ 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:55.870 Process raid pid: 62786 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62786 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62786' 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62786 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@833 -- # '[' -z 62786 ']' 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:55.870 11:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.128 [2024-11-15 11:20:38.841361] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:08:56.128 [2024-11-15 11:20:38.841567] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.128 [2024-11-15 11:20:39.049836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.394 [2024-11-15 11:20:39.187647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.660 [2024-11-15 11:20:39.404719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.660 [2024-11-15 11:20:39.404771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.918 [2024-11-15 11:20:39.822912] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.918 [2024-11-15 11:20:39.823157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.918 [2024-11-15 11:20:39.823216] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.918 [2024-11-15 11:20:39.823236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.918 11:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.919 11:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.177 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.177 "name": "Existed_Raid", 00:08:57.177 "uuid": "b5824eb3-a159-42b4-b58d-167aaf22e558", 00:08:57.177 "strip_size_kb": 0, 00:08:57.177 "state": "configuring", 00:08:57.177 "raid_level": "raid1", 00:08:57.177 "superblock": true, 00:08:57.177 "num_base_bdevs": 2, 00:08:57.177 "num_base_bdevs_discovered": 0, 00:08:57.177 "num_base_bdevs_operational": 2, 00:08:57.177 "base_bdevs_list": [ 00:08:57.177 { 00:08:57.177 "name": "BaseBdev1", 00:08:57.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.177 "is_configured": false, 00:08:57.177 "data_offset": 0, 00:08:57.177 "data_size": 0 00:08:57.177 }, 00:08:57.177 { 00:08:57.177 "name": "BaseBdev2", 00:08:57.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.177 "is_configured": false, 00:08:57.177 "data_offset": 0, 00:08:57.177 "data_size": 0 00:08:57.177 } 00:08:57.177 ] 00:08:57.177 }' 00:08:57.177 11:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.177 11:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.436 [2024-11-15 11:20:40.310995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.436 [2024-11-15 11:20:40.311246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.436 [2024-11-15 11:20:40.318991] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.436 [2024-11-15 11:20:40.319249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.436 [2024-11-15 11:20:40.319276] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.436 [2024-11-15 11:20:40.319300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:57.436 [2024-11-15 11:20:40.364798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.436 BaseBdev1 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.436 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.695 [ 00:08:57.695 { 00:08:57.695 "name": "BaseBdev1", 00:08:57.695 "aliases": [ 00:08:57.695 "77c6c465-5d70-4bb1-9217-f2bbc0ec6a2d" 00:08:57.695 ], 00:08:57.695 "product_name": "Malloc disk", 00:08:57.695 "block_size": 512, 
00:08:57.695 "num_blocks": 65536, 00:08:57.695 "uuid": "77c6c465-5d70-4bb1-9217-f2bbc0ec6a2d", 00:08:57.695 "assigned_rate_limits": { 00:08:57.695 "rw_ios_per_sec": 0, 00:08:57.695 "rw_mbytes_per_sec": 0, 00:08:57.695 "r_mbytes_per_sec": 0, 00:08:57.695 "w_mbytes_per_sec": 0 00:08:57.695 }, 00:08:57.695 "claimed": true, 00:08:57.695 "claim_type": "exclusive_write", 00:08:57.695 "zoned": false, 00:08:57.695 "supported_io_types": { 00:08:57.695 "read": true, 00:08:57.695 "write": true, 00:08:57.695 "unmap": true, 00:08:57.695 "flush": true, 00:08:57.695 "reset": true, 00:08:57.695 "nvme_admin": false, 00:08:57.695 "nvme_io": false, 00:08:57.695 "nvme_io_md": false, 00:08:57.695 "write_zeroes": true, 00:08:57.695 "zcopy": true, 00:08:57.695 "get_zone_info": false, 00:08:57.695 "zone_management": false, 00:08:57.695 "zone_append": false, 00:08:57.695 "compare": false, 00:08:57.695 "compare_and_write": false, 00:08:57.695 "abort": true, 00:08:57.695 "seek_hole": false, 00:08:57.695 "seek_data": false, 00:08:57.695 "copy": true, 00:08:57.695 "nvme_iov_md": false 00:08:57.695 }, 00:08:57.695 "memory_domains": [ 00:08:57.695 { 00:08:57.695 "dma_device_id": "system", 00:08:57.695 "dma_device_type": 1 00:08:57.695 }, 00:08:57.695 { 00:08:57.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.695 "dma_device_type": 2 00:08:57.695 } 00:08:57.695 ], 00:08:57.695 "driver_specific": {} 00:08:57.695 } 00:08:57.695 ] 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.695 "name": "Existed_Raid", 00:08:57.695 "uuid": "cbbf4fa1-bb6b-4176-b532-eec1a113a560", 00:08:57.695 "strip_size_kb": 0, 00:08:57.695 "state": "configuring", 00:08:57.695 "raid_level": "raid1", 00:08:57.695 "superblock": true, 00:08:57.695 "num_base_bdevs": 2, 00:08:57.695 "num_base_bdevs_discovered": 1, 00:08:57.695 "num_base_bdevs_operational": 2, 00:08:57.695 "base_bdevs_list": [ 00:08:57.695 { 00:08:57.695 "name": "BaseBdev1", 
00:08:57.695 "uuid": "77c6c465-5d70-4bb1-9217-f2bbc0ec6a2d", 00:08:57.695 "is_configured": true, 00:08:57.695 "data_offset": 2048, 00:08:57.695 "data_size": 63488 00:08:57.695 }, 00:08:57.695 { 00:08:57.695 "name": "BaseBdev2", 00:08:57.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.695 "is_configured": false, 00:08:57.695 "data_offset": 0, 00:08:57.695 "data_size": 0 00:08:57.695 } 00:08:57.695 ] 00:08:57.695 }' 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.695 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.263 [2024-11-15 11:20:40.929037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.263 [2024-11-15 11:20:40.929102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.263 [2024-11-15 11:20:40.937048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.263 [2024-11-15 11:20:40.939663] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:08:58.263 [2024-11-15 11:20:40.939717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.263 "name": "Existed_Raid", 00:08:58.263 "uuid": "9bc8f43c-8130-486d-a971-40b600534f9e", 00:08:58.263 "strip_size_kb": 0, 00:08:58.263 "state": "configuring", 00:08:58.263 "raid_level": "raid1", 00:08:58.263 "superblock": true, 00:08:58.263 "num_base_bdevs": 2, 00:08:58.263 "num_base_bdevs_discovered": 1, 00:08:58.263 "num_base_bdevs_operational": 2, 00:08:58.263 "base_bdevs_list": [ 00:08:58.263 { 00:08:58.263 "name": "BaseBdev1", 00:08:58.263 "uuid": "77c6c465-5d70-4bb1-9217-f2bbc0ec6a2d", 00:08:58.263 "is_configured": true, 00:08:58.263 "data_offset": 2048, 00:08:58.263 "data_size": 63488 00:08:58.263 }, 00:08:58.263 { 00:08:58.263 "name": "BaseBdev2", 00:08:58.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.263 "is_configured": false, 00:08:58.263 "data_offset": 0, 00:08:58.263 "data_size": 0 00:08:58.263 } 00:08:58.263 ] 00:08:58.263 }' 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.263 11:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.522 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.522 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.522 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.782 [2024-11-15 11:20:41.481222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.782 [2024-11-15 11:20:41.481713] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:58.782 [2024-11-15 11:20:41.481734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:58.782 [2024-11-15 11:20:41.482165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:58.782 BaseBdev2 00:08:58.782 [2024-11-15 11:20:41.482386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:58.782 [2024-11-15 11:20:41.482409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:58.782 [2024-11-15 11:20:41.482638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.782 [ 00:08:58.782 { 00:08:58.782 "name": "BaseBdev2", 00:08:58.782 "aliases": [ 00:08:58.782 "6312db37-bcae-4b5a-a4dd-19bfe317d525" 00:08:58.782 ], 00:08:58.782 "product_name": "Malloc disk", 00:08:58.782 "block_size": 512, 00:08:58.782 "num_blocks": 65536, 00:08:58.782 "uuid": "6312db37-bcae-4b5a-a4dd-19bfe317d525", 00:08:58.782 "assigned_rate_limits": { 00:08:58.782 "rw_ios_per_sec": 0, 00:08:58.782 "rw_mbytes_per_sec": 0, 00:08:58.782 "r_mbytes_per_sec": 0, 00:08:58.782 "w_mbytes_per_sec": 0 00:08:58.782 }, 00:08:58.782 "claimed": true, 00:08:58.782 "claim_type": "exclusive_write", 00:08:58.782 "zoned": false, 00:08:58.782 "supported_io_types": { 00:08:58.782 "read": true, 00:08:58.782 "write": true, 00:08:58.782 "unmap": true, 00:08:58.782 "flush": true, 00:08:58.782 "reset": true, 00:08:58.782 "nvme_admin": false, 00:08:58.782 "nvme_io": false, 00:08:58.782 "nvme_io_md": false, 00:08:58.782 "write_zeroes": true, 00:08:58.782 "zcopy": true, 00:08:58.782 "get_zone_info": false, 00:08:58.782 "zone_management": false, 00:08:58.782 "zone_append": false, 00:08:58.782 "compare": false, 00:08:58.782 "compare_and_write": false, 00:08:58.782 "abort": true, 00:08:58.782 "seek_hole": false, 00:08:58.782 "seek_data": false, 00:08:58.782 "copy": true, 00:08:58.782 "nvme_iov_md": false 00:08:58.782 }, 00:08:58.782 "memory_domains": [ 00:08:58.782 { 00:08:58.782 "dma_device_id": "system", 00:08:58.782 "dma_device_type": 1 00:08:58.782 }, 00:08:58.782 { 00:08:58.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.782 "dma_device_type": 2 00:08:58.782 } 00:08:58.782 ], 00:08:58.782 "driver_specific": 
{} 00:08:58.782 } 00:08:58.782 ] 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.782 "name": "Existed_Raid", 00:08:58.782 "uuid": "9bc8f43c-8130-486d-a971-40b600534f9e", 00:08:58.782 "strip_size_kb": 0, 00:08:58.782 "state": "online", 00:08:58.782 "raid_level": "raid1", 00:08:58.782 "superblock": true, 00:08:58.782 "num_base_bdevs": 2, 00:08:58.782 "num_base_bdevs_discovered": 2, 00:08:58.782 "num_base_bdevs_operational": 2, 00:08:58.782 "base_bdevs_list": [ 00:08:58.782 { 00:08:58.782 "name": "BaseBdev1", 00:08:58.782 "uuid": "77c6c465-5d70-4bb1-9217-f2bbc0ec6a2d", 00:08:58.782 "is_configured": true, 00:08:58.782 "data_offset": 2048, 00:08:58.782 "data_size": 63488 00:08:58.782 }, 00:08:58.782 { 00:08:58.782 "name": "BaseBdev2", 00:08:58.782 "uuid": "6312db37-bcae-4b5a-a4dd-19bfe317d525", 00:08:58.782 "is_configured": true, 00:08:58.782 "data_offset": 2048, 00:08:58.782 "data_size": 63488 00:08:58.782 } 00:08:58.782 ] 00:08:58.782 }' 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.782 11:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.350 [2024-11-15 11:20:42.009951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.350 "name": "Existed_Raid", 00:08:59.350 "aliases": [ 00:08:59.350 "9bc8f43c-8130-486d-a971-40b600534f9e" 00:08:59.350 ], 00:08:59.350 "product_name": "Raid Volume", 00:08:59.350 "block_size": 512, 00:08:59.350 "num_blocks": 63488, 00:08:59.350 "uuid": "9bc8f43c-8130-486d-a971-40b600534f9e", 00:08:59.350 "assigned_rate_limits": { 00:08:59.350 "rw_ios_per_sec": 0, 00:08:59.350 "rw_mbytes_per_sec": 0, 00:08:59.350 "r_mbytes_per_sec": 0, 00:08:59.350 "w_mbytes_per_sec": 0 00:08:59.350 }, 00:08:59.350 "claimed": false, 00:08:59.350 "zoned": false, 00:08:59.350 "supported_io_types": { 00:08:59.350 "read": true, 00:08:59.350 "write": true, 00:08:59.350 "unmap": false, 00:08:59.350 "flush": false, 00:08:59.350 "reset": true, 00:08:59.350 "nvme_admin": false, 00:08:59.350 "nvme_io": false, 00:08:59.350 "nvme_io_md": false, 00:08:59.350 "write_zeroes": true, 00:08:59.350 "zcopy": false, 00:08:59.350 "get_zone_info": false, 00:08:59.350 "zone_management": false, 00:08:59.350 "zone_append": false, 00:08:59.350 "compare": false, 00:08:59.350 "compare_and_write": false, 
00:08:59.350 "abort": false, 00:08:59.350 "seek_hole": false, 00:08:59.350 "seek_data": false, 00:08:59.350 "copy": false, 00:08:59.350 "nvme_iov_md": false 00:08:59.350 }, 00:08:59.350 "memory_domains": [ 00:08:59.350 { 00:08:59.350 "dma_device_id": "system", 00:08:59.350 "dma_device_type": 1 00:08:59.350 }, 00:08:59.350 { 00:08:59.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.350 "dma_device_type": 2 00:08:59.350 }, 00:08:59.350 { 00:08:59.350 "dma_device_id": "system", 00:08:59.350 "dma_device_type": 1 00:08:59.350 }, 00:08:59.350 { 00:08:59.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.350 "dma_device_type": 2 00:08:59.350 } 00:08:59.350 ], 00:08:59.350 "driver_specific": { 00:08:59.350 "raid": { 00:08:59.350 "uuid": "9bc8f43c-8130-486d-a971-40b600534f9e", 00:08:59.350 "strip_size_kb": 0, 00:08:59.350 "state": "online", 00:08:59.350 "raid_level": "raid1", 00:08:59.350 "superblock": true, 00:08:59.350 "num_base_bdevs": 2, 00:08:59.350 "num_base_bdevs_discovered": 2, 00:08:59.350 "num_base_bdevs_operational": 2, 00:08:59.350 "base_bdevs_list": [ 00:08:59.350 { 00:08:59.350 "name": "BaseBdev1", 00:08:59.350 "uuid": "77c6c465-5d70-4bb1-9217-f2bbc0ec6a2d", 00:08:59.350 "is_configured": true, 00:08:59.350 "data_offset": 2048, 00:08:59.350 "data_size": 63488 00:08:59.350 }, 00:08:59.350 { 00:08:59.350 "name": "BaseBdev2", 00:08:59.350 "uuid": "6312db37-bcae-4b5a-a4dd-19bfe317d525", 00:08:59.350 "is_configured": true, 00:08:59.350 "data_offset": 2048, 00:08:59.350 "data_size": 63488 00:08:59.350 } 00:08:59.350 ] 00:08:59.350 } 00:08:59.350 } 00:08:59.350 }' 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:59.350 BaseBdev2' 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.350 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.350 [2024-11-15 11:20:42.261661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:59.609 11:20:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.609 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.610 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.610 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.610 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.610 "name": "Existed_Raid", 00:08:59.610 "uuid": "9bc8f43c-8130-486d-a971-40b600534f9e", 00:08:59.610 "strip_size_kb": 0, 00:08:59.610 "state": "online", 00:08:59.610 "raid_level": "raid1", 00:08:59.610 "superblock": true, 00:08:59.610 "num_base_bdevs": 2, 00:08:59.610 "num_base_bdevs_discovered": 1, 00:08:59.610 "num_base_bdevs_operational": 1, 00:08:59.610 "base_bdevs_list": [ 00:08:59.610 { 00:08:59.610 "name": null, 00:08:59.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.610 "is_configured": false, 00:08:59.610 "data_offset": 0, 00:08:59.610 "data_size": 63488 00:08:59.610 }, 00:08:59.610 { 00:08:59.610 "name": "BaseBdev2", 00:08:59.610 "uuid": "6312db37-bcae-4b5a-a4dd-19bfe317d525", 00:08:59.610 "is_configured": true, 00:08:59.610 "data_offset": 2048, 00:08:59.610 "data_size": 63488 00:08:59.610 } 00:08:59.610 ] 00:08:59.610 }' 00:08:59.610 
11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.610 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.176 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:00.176 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.176 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.176 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.176 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.176 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.176 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.176 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.176 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.176 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:00.176 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.177 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.177 [2024-11-15 11:20:42.910888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.177 [2024-11-15 11:20:42.911251] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.177 [2024-11-15 11:20:42.991430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.177 [2024-11-15 11:20:42.991507] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.177 [2024-11-15 11:20:42.991527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:00.177 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.177 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.177 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.177 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.177 11:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:00.177 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.177 11:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62786 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62786 ']' 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62786 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62786 00:09:00.177 killing process with pid 62786 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62786' 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62786 00:09:00.177 [2024-11-15 11:20:43.081851] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.177 11:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62786 00:09:00.177 [2024-11-15 11:20:43.097636] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.554 11:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:01.554 00:09:01.554 real 0m5.413s 00:09:01.554 user 0m8.022s 00:09:01.554 sys 0m0.884s 00:09:01.554 11:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:01.554 ************************************ 00:09:01.554 END TEST raid_state_function_test_sb 00:09:01.554 11:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.554 ************************************ 00:09:01.554 11:20:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:01.554 11:20:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:01.554 11:20:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.554 11:20:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.554 
************************************
00:09:01.554 START TEST raid_superblock_test
00:09:01.554 ************************************
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63044
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63044
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63044 ']'
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:09:01.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:01.554 11:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:01.554 [2024-11-15 11:20:44.314320] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization...
00:09:01.554 [2024-11-15 11:20:44.314526] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63044 ]
00:09:01.554 [2024-11-15 11:20:44.498813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:01.813 [2024-11-15 11:20:44.633368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:02.071 [2024-11-15 11:20:44.849424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:02.071 [2024-11-15 11:20:44.849508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.330 malloc1
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.330 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.330 [2024-11-15 11:20:45.273334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:02.330 [2024-11-15 11:20:45.273421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:02.330 [2024-11-15 11:20:45.273454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:02.330 [2024-11-15 11:20:45.273469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:02.330 [2024-11-15 11:20:45.276588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:02.330 [2024-11-15 11:20:45.276631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:02.589 pt1
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.589 malloc2
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.589 [2024-11-15 11:20:45.329941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:02.589 [2024-11-15 11:20:45.330032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:02.589 [2024-11-15 11:20:45.330077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:02.589 [2024-11-15 11:20:45.330093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:02.589 [2024-11-15 11:20:45.333278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:02.589 [2024-11-15 11:20:45.333321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:02.589 pt2
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.589 [2024-11-15 11:20:45.338078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:02.589 [2024-11-15 11:20:45.340850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:02.589 [2024-11-15 11:20:45.341323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:09:02.589 [2024-11-15 11:20:45.341353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:02.589 [2024-11-15 11:20:45.341743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:02.589 [2024-11-15 11:20:45.342011] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:09:02.589 [2024-11-15 11:20:45.342037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:09:02.589 [2024-11-15 11:20:45.342321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:02.589 "name": "raid_bdev1",
00:09:02.589 "uuid": "4eaff14b-3ad9-4cb8-9df7-0248c095c2e4",
00:09:02.589 "strip_size_kb": 0,
00:09:02.589 "state": "online",
00:09:02.589 "raid_level": "raid1",
00:09:02.589 "superblock": true,
00:09:02.589 "num_base_bdevs": 2,
00:09:02.589 "num_base_bdevs_discovered": 2,
00:09:02.589 "num_base_bdevs_operational": 2,
00:09:02.589 "base_bdevs_list": [
00:09:02.589 {
00:09:02.589 "name": "pt1",
00:09:02.589 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:02.589 "is_configured": true,
00:09:02.589 "data_offset": 2048,
00:09:02.589 "data_size": 63488
00:09:02.589 },
00:09:02.589 {
00:09:02.589 "name": "pt2",
00:09:02.589 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:02.589 "is_configured": true,
00:09:02.589 "data_offset": 2048,
00:09:02.589 "data_size": 63488
00:09:02.589 }
00:09:02.589 ]
00:09:02.589 }'
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:02.589 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.209 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:03.209 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:03.209 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:03.209 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:03.209 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:03.209 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:03.209 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:03.209 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:03.209 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.209 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.209 [2024-11-15 11:20:45.866882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:03.209 11:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.209 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:03.209 "name": "raid_bdev1",
00:09:03.209 "aliases": [
00:09:03.209 "4eaff14b-3ad9-4cb8-9df7-0248c095c2e4"
00:09:03.209 ],
00:09:03.209 "product_name": "Raid Volume",
00:09:03.209 "block_size": 512,
00:09:03.209 "num_blocks": 63488,
00:09:03.209 "uuid": "4eaff14b-3ad9-4cb8-9df7-0248c095c2e4",
00:09:03.209 "assigned_rate_limits": {
00:09:03.209 "rw_ios_per_sec": 0,
00:09:03.209 "rw_mbytes_per_sec": 0,
00:09:03.209 "r_mbytes_per_sec": 0,
00:09:03.209 "w_mbytes_per_sec": 0
00:09:03.209 },
00:09:03.209 "claimed": false,
00:09:03.209 "zoned": false,
00:09:03.209 "supported_io_types": {
00:09:03.209 "read": true,
00:09:03.209 "write": true,
00:09:03.209 "unmap": false,
00:09:03.209 "flush": false,
00:09:03.209 "reset": true,
00:09:03.209 "nvme_admin": false,
00:09:03.209 "nvme_io": false,
00:09:03.209 "nvme_io_md": false,
00:09:03.209 "write_zeroes": true,
00:09:03.209 "zcopy": false,
00:09:03.209 "get_zone_info": false,
00:09:03.209 "zone_management": false,
00:09:03.209 "zone_append": false,
00:09:03.209 "compare": false,
00:09:03.209 "compare_and_write": false,
00:09:03.209 "abort": false,
00:09:03.209 "seek_hole": false,
00:09:03.209 "seek_data": false,
00:09:03.209 "copy": false,
00:09:03.209 "nvme_iov_md": false
00:09:03.209 },
00:09:03.209 "memory_domains": [
00:09:03.209 {
00:09:03.209 "dma_device_id": "system",
00:09:03.209 "dma_device_type": 1
00:09:03.209 },
00:09:03.209 {
00:09:03.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:03.209 "dma_device_type": 2
00:09:03.209 },
00:09:03.209 {
00:09:03.209 "dma_device_id": "system",
00:09:03.209 "dma_device_type": 1
00:09:03.209 },
00:09:03.209 {
00:09:03.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:03.209 "dma_device_type": 2
00:09:03.209 }
00:09:03.209 ],
00:09:03.209 "driver_specific": {
00:09:03.209 "raid": {
00:09:03.209 "uuid": "4eaff14b-3ad9-4cb8-9df7-0248c095c2e4",
00:09:03.209 "strip_size_kb": 0,
00:09:03.209 "state": "online",
00:09:03.209 "raid_level": "raid1",
00:09:03.209 "superblock": true,
00:09:03.209 "num_base_bdevs": 2,
00:09:03.209 "num_base_bdevs_discovered": 2,
00:09:03.209 "num_base_bdevs_operational": 2,
00:09:03.209 "base_bdevs_list": [
00:09:03.209 {
00:09:03.209 "name": "pt1",
00:09:03.209 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:03.209 "is_configured": true,
00:09:03.209 "data_offset": 2048,
00:09:03.209 "data_size": 63488
00:09:03.210 },
00:09:03.210 {
00:09:03.210 "name": "pt2",
00:09:03.210 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:03.210 "is_configured": true,
00:09:03.210 "data_offset": 2048,
00:09:03.210 "data_size": 63488
00:09:03.210 }
00:09:03.210 ]
00:09:03.210 }
00:09:03.210 }
00:09:03.210 }'
00:09:03.210 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:03.210 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:03.210 pt2'
00:09:03.210 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:03.210 [2024-11-15 11:20:46.122978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:03.210 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4eaff14b-3ad9-4cb8-9df7-0248c095c2e4
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4eaff14b-3ad9-4cb8-9df7-0248c095c2e4 ']'
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.468 [2024-11-15 11:20:46.170577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:03.468 [2024-11-15 11:20:46.170751] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:03.468 [2024-11-15 11:20:46.170946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:03.468 [2024-11-15 11:20:46.171162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:03.468 [2024-11-15 11:20:46.171334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.468 [2024-11-15 11:20:46.298602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:03.468 [2024-11-15 11:20:46.301493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:03.468 [2024-11-15 11:20:46.301609] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:03.468 [2024-11-15 11:20:46.301675] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:03.468 [2024-11-15 11:20:46.301699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:03.468 [2024-11-15 11:20:46.301713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:09:03.468 request:
00:09:03.468 {
00:09:03.468 "name": "raid_bdev1",
00:09:03.468 "raid_level": "raid1",
00:09:03.468 "base_bdevs": [
00:09:03.468 "malloc1",
00:09:03.468 "malloc2"
00:09:03.468 ],
00:09:03.468 "superblock": false,
00:09:03.468 "method": "bdev_raid_create",
00:09:03.468 "req_id": 1
00:09:03.468 }
00:09:03.468 Got JSON-RPC error response
00:09:03.468 response:
00:09:03.468 {
00:09:03.468 "code": -17,
00:09:03.468 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:03.468 }
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:03.468 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.469 [2024-11-15 11:20:46.366656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:03.469 [2024-11-15 11:20:46.366855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:03.469 [2024-11-15 11:20:46.366974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:03.469 [2024-11-15 11:20:46.367102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:03.469 [2024-11-15 11:20:46.370281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:03.469 [2024-11-15 11:20:46.370439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:03.469 [2024-11-15 11:20:46.370571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:03.469 [2024-11-15 11:20:46.370641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:03.469 pt1
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.469 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.727 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:03.727 "name": "raid_bdev1",
00:09:03.727 "uuid": "4eaff14b-3ad9-4cb8-9df7-0248c095c2e4",
00:09:03.727 "strip_size_kb": 0,
00:09:03.727 "state": "configuring",
00:09:03.727 "raid_level": "raid1",
00:09:03.727 "superblock": true,
00:09:03.727 "num_base_bdevs": 2,
00:09:03.727 "num_base_bdevs_discovered": 1,
00:09:03.728 "num_base_bdevs_operational": 2,
00:09:03.728 "base_bdevs_list": [
00:09:03.728 {
00:09:03.728 "name": "pt1",
00:09:03.728 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:03.728 "is_configured": true,
00:09:03.728 "data_offset": 2048,
00:09:03.728 "data_size": 63488
00:09:03.728 },
00:09:03.728 {
00:09:03.728 "name": null,
00:09:03.728 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:03.728 "is_configured": false,
00:09:03.728 "data_offset": 2048,
00:09:03.728 "data_size": 63488
00:09:03.728 }
00:09:03.728 ]
00:09:03.728 }'
00:09:03.728 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:03.728 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.986 [2024-11-15 11:20:46.875089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:03.986 [2024-11-15 11:20:46.875206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:03.986 [2024-11-15 11:20:46.875243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:09:03.986 [2024-11-15 11:20:46.875262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:03.986 [2024-11-15 11:20:46.875952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:03.986 [2024-11-15 11:20:46.876137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:03.986 [2024-11-15 11:20:46.876290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:03.986 [2024-11-15 11:20:46.876332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:03.986 [2024-11-15 11:20:46.876531] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:03.986 [2024-11-15 11:20:46.876572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:03.986 [2024-11-15 11:20:46.876935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:03.986 [2024-11-15 11:20:46.877135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:03.986 [2024-11-15 11:20:46.877150] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:09:03.986 [2024-11-15 11:20:46.877444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:03.986 pt2
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:03.986 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.987 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.987 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:03.987 "name": "raid_bdev1",
00:09:03.987 "uuid": "4eaff14b-3ad9-4cb8-9df7-0248c095c2e4",
00:09:03.987 "strip_size_kb": 0,
00:09:03.987 "state": "online",
00:09:03.987 "raid_level": "raid1",
00:09:03.987 "superblock": true,
00:09:03.987 "num_base_bdevs": 2,
00:09:03.987 "num_base_bdevs_discovered": 2,
00:09:03.987 "num_base_bdevs_operational": 2,
00:09:03.987 "base_bdevs_list": [
00:09:03.987 {
00:09:03.987 "name": "pt1",
00:09:03.987 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:03.987 "is_configured": true,
00:09:03.987 "data_offset": 2048,
00:09:03.987 "data_size": 63488
00:09:03.987 },
00:09:03.987 {
00:09:03.987 "name": "pt2",
00:09:03.987 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:03.987 "is_configured": true,
00:09:03.987 "data_offset": 2048,
00:09:03.987 "data_size": 63488
00:09:03.987 }
00:09:03.987 ]
00:09:03.987 }'
00:09:03.987 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:03.987 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.552 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:09:04.552 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:04.552 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:04.552 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:04.552 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:04.552 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:04.552 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:04.552 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.552 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.552 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:04.552 [2024-11-15 11:20:47.375692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:04.552 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.552 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:04.552 "name": "raid_bdev1",
00:09:04.552 "aliases": [
00:09:04.552 "4eaff14b-3ad9-4cb8-9df7-0248c095c2e4"
00:09:04.552 ],
00:09:04.552 "product_name": "Raid Volume",
00:09:04.552 "block_size": 512,
00:09:04.552 "num_blocks": 63488,
00:09:04.552 "uuid": "4eaff14b-3ad9-4cb8-9df7-0248c095c2e4",
00:09:04.552 "assigned_rate_limits": {
00:09:04.552 "rw_ios_per_sec": 0,
00:09:04.552 "rw_mbytes_per_sec": 0,
00:09:04.552 "r_mbytes_per_sec": 0,
00:09:04.552 "w_mbytes_per_sec": 0
00:09:04.552 },
00:09:04.552 "claimed": false,
00:09:04.552 "zoned": false,
00:09:04.552 "supported_io_types": {
00:09:04.552 "read": true,
00:09:04.552 "write": true,
00:09:04.552 "unmap": false,
00:09:04.552 "flush": false,
00:09:04.552 "reset": true,
00:09:04.552 "nvme_admin": false,
00:09:04.552 "nvme_io": false,
00:09:04.552 "nvme_io_md": false,
00:09:04.552 "write_zeroes": true,
00:09:04.552 "zcopy": false,
00:09:04.552 "get_zone_info": false,
00:09:04.552 "zone_management": false,
00:09:04.552 "zone_append": false,
00:09:04.552 "compare": false,
00:09:04.552 "compare_and_write": false,
00:09:04.552 "abort": false,
00:09:04.552 "seek_hole": false,
00:09:04.552 "seek_data": false,
00:09:04.552 "copy": false,
00:09:04.552 "nvme_iov_md": false
00:09:04.552 },
00:09:04.552 "memory_domains": [
00:09:04.552 {
00:09:04.552 "dma_device_id":
"system", 00:09:04.552 "dma_device_type": 1 00:09:04.552 }, 00:09:04.552 { 00:09:04.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.552 "dma_device_type": 2 00:09:04.552 }, 00:09:04.552 { 00:09:04.552 "dma_device_id": "system", 00:09:04.552 "dma_device_type": 1 00:09:04.552 }, 00:09:04.552 { 00:09:04.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.552 "dma_device_type": 2 00:09:04.552 } 00:09:04.552 ], 00:09:04.552 "driver_specific": { 00:09:04.552 "raid": { 00:09:04.552 "uuid": "4eaff14b-3ad9-4cb8-9df7-0248c095c2e4", 00:09:04.552 "strip_size_kb": 0, 00:09:04.552 "state": "online", 00:09:04.552 "raid_level": "raid1", 00:09:04.552 "superblock": true, 00:09:04.552 "num_base_bdevs": 2, 00:09:04.552 "num_base_bdevs_discovered": 2, 00:09:04.552 "num_base_bdevs_operational": 2, 00:09:04.552 "base_bdevs_list": [ 00:09:04.552 { 00:09:04.552 "name": "pt1", 00:09:04.552 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.552 "is_configured": true, 00:09:04.552 "data_offset": 2048, 00:09:04.552 "data_size": 63488 00:09:04.552 }, 00:09:04.552 { 00:09:04.552 "name": "pt2", 00:09:04.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.552 "is_configured": true, 00:09:04.552 "data_offset": 2048, 00:09:04.552 "data_size": 63488 00:09:04.552 } 00:09:04.552 ] 00:09:04.552 } 00:09:04.553 } 00:09:04.553 }' 00:09:04.553 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.553 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:04.553 pt2' 00:09:04.553 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.811 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.811 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:09:04.811 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:04.811 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.811 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.811 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.811 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.811 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.811 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.811 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.812 [2024-11-15 11:20:47.651757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4eaff14b-3ad9-4cb8-9df7-0248c095c2e4 '!=' 4eaff14b-3ad9-4cb8-9df7-0248c095c2e4 ']' 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.812 [2024-11-15 11:20:47.695592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.812 "name": "raid_bdev1", 00:09:04.812 "uuid": "4eaff14b-3ad9-4cb8-9df7-0248c095c2e4", 00:09:04.812 "strip_size_kb": 0, 00:09:04.812 "state": "online", 00:09:04.812 "raid_level": "raid1", 00:09:04.812 "superblock": true, 00:09:04.812 "num_base_bdevs": 2, 00:09:04.812 "num_base_bdevs_discovered": 1, 00:09:04.812 "num_base_bdevs_operational": 1, 00:09:04.812 "base_bdevs_list": [ 00:09:04.812 { 00:09:04.812 "name": null, 00:09:04.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.812 "is_configured": false, 00:09:04.812 "data_offset": 0, 00:09:04.812 "data_size": 63488 00:09:04.812 }, 00:09:04.812 { 00:09:04.812 "name": "pt2", 00:09:04.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.812 "is_configured": true, 00:09:04.812 "data_offset": 2048, 00:09:04.812 "data_size": 63488 00:09:04.812 } 00:09:04.812 ] 00:09:04.812 }' 
00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.812 11:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.379 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.379 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.379 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.379 [2024-11-15 11:20:48.227817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.379 [2024-11-15 11:20:48.227862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.379 [2024-11-15 11:20:48.227999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.379 [2024-11-15 11:20:48.228105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.379 [2024-11-15 11:20:48.228122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.380 [2024-11-15 11:20:48.303800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:05.380 [2024-11-15 11:20:48.303863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.380 [2024-11-15 11:20:48.303887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:05.380 [2024-11-15 11:20:48.303904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.380 
[2024-11-15 11:20:48.306947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.380 [2024-11-15 11:20:48.307207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:05.380 [2024-11-15 11:20:48.307324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:05.380 [2024-11-15 11:20:48.307390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:05.380 [2024-11-15 11:20:48.307565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:05.380 [2024-11-15 11:20:48.307593] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:05.380 [2024-11-15 11:20:48.307916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:05.380 [2024-11-15 11:20:48.308167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:05.380 [2024-11-15 11:20:48.308193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:05.380 [2024-11-15 11:20:48.308421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.380 pt2 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.380 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.638 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.638 "name": "raid_bdev1", 00:09:05.638 "uuid": "4eaff14b-3ad9-4cb8-9df7-0248c095c2e4", 00:09:05.638 "strip_size_kb": 0, 00:09:05.638 "state": "online", 00:09:05.638 "raid_level": "raid1", 00:09:05.639 "superblock": true, 00:09:05.639 "num_base_bdevs": 2, 00:09:05.639 "num_base_bdevs_discovered": 1, 00:09:05.639 "num_base_bdevs_operational": 1, 00:09:05.639 "base_bdevs_list": [ 00:09:05.639 { 00:09:05.639 "name": null, 00:09:05.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.639 "is_configured": false, 00:09:05.639 "data_offset": 2048, 00:09:05.639 "data_size": 63488 00:09:05.639 }, 00:09:05.639 { 00:09:05.639 "name": "pt2", 00:09:05.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.639 "is_configured": true, 00:09:05.639 "data_offset": 2048, 00:09:05.639 "data_size": 63488 00:09:05.639 } 00:09:05.639 ] 00:09:05.639 }' 
00:09:05.639 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.639 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.897 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.897 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.897 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.897 [2024-11-15 11:20:48.844604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.897 [2024-11-15 11:20:48.844660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.897 [2024-11-15 11:20:48.844781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.897 [2024-11-15 11:20:48.844859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.897 [2024-11-15 11:20:48.844876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.156 [2024-11-15 11:20:48.908591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:06.156 [2024-11-15 11:20:48.908662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.156 [2024-11-15 11:20:48.908694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:06.156 [2024-11-15 11:20:48.908709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.156 [2024-11-15 11:20:48.911895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.156 [2024-11-15 11:20:48.911934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:06.156 [2024-11-15 11:20:48.912030] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:06.156 [2024-11-15 11:20:48.912081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:06.156 [2024-11-15 11:20:48.912315] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:06.156 [2024-11-15 11:20:48.912332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:06.156 [2024-11-15 11:20:48.912352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:06.156 [2024-11-15 11:20:48.912409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:09:06.156 [2024-11-15 11:20:48.912502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:06.156 [2024-11-15 11:20:48.912515] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:06.156 [2024-11-15 11:20:48.912919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:06.156 [2024-11-15 11:20:48.913131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:06.156 [2024-11-15 11:20:48.913153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:06.156 [2024-11-15 11:20:48.913397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.156 pt1 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.156 "name": "raid_bdev1", 00:09:06.156 "uuid": "4eaff14b-3ad9-4cb8-9df7-0248c095c2e4", 00:09:06.156 "strip_size_kb": 0, 00:09:06.156 "state": "online", 00:09:06.156 "raid_level": "raid1", 00:09:06.156 "superblock": true, 00:09:06.156 "num_base_bdevs": 2, 00:09:06.156 "num_base_bdevs_discovered": 1, 00:09:06.156 "num_base_bdevs_operational": 1, 00:09:06.156 "base_bdevs_list": [ 00:09:06.156 { 00:09:06.156 "name": null, 00:09:06.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.156 "is_configured": false, 00:09:06.156 "data_offset": 2048, 00:09:06.156 "data_size": 63488 00:09:06.156 }, 00:09:06.156 { 00:09:06.156 "name": "pt2", 00:09:06.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.156 "is_configured": true, 00:09:06.156 "data_offset": 2048, 00:09:06.156 "data_size": 63488 00:09:06.156 } 00:09:06.156 ] 00:09:06.156 }' 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.156 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:06.725 [2024-11-15 11:20:49.457060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4eaff14b-3ad9-4cb8-9df7-0248c095c2e4 '!=' 4eaff14b-3ad9-4cb8-9df7-0248c095c2e4 ']' 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63044 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63044 ']' 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63044 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63044 00:09:06.725 killing process with pid 
63044 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63044' 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63044 00:09:06.725 [2024-11-15 11:20:49.534385] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.725 11:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63044 00:09:06.725 [2024-11-15 11:20:49.534501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.725 [2024-11-15 11:20:49.534563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.725 [2024-11-15 11:20:49.534585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:06.983 [2024-11-15 11:20:49.725534] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.920 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:07.920 00:09:07.920 real 0m6.666s 00:09:07.920 user 0m10.412s 00:09:07.920 sys 0m0.997s 00:09:07.920 11:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:07.920 ************************************ 00:09:07.920 END TEST raid_superblock_test 00:09:07.920 ************************************ 00:09:07.920 11:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.180 11:20:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:08.180 11:20:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:08.180 11:20:50 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:09:08.180 11:20:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.180 ************************************ 00:09:08.180 START TEST raid_read_error_test 00:09:08.180 ************************************ 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:08.180 11:20:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.059drigEyR 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63379 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63379 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63379 ']' 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:08.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:08.180 11:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.180 [2024-11-15 11:20:51.033408] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:09:08.180 [2024-11-15 11:20:51.034247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63379 ] 00:09:08.439 [2024-11-15 11:20:51.220598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.439 [2024-11-15 11:20:51.359022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.699 [2024-11-15 11:20:51.567976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.699 [2024-11-15 11:20:51.568053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.267 11:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:09.267 11:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:09.267 11:20:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.267 11:20:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:09.267 11:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.267 11:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.267 BaseBdev1_malloc 00:09:09.267 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.267 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:09.267 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.267 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.267 true 00:09:09.267 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.267 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:09.267 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.267 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.267 [2024-11-15 11:20:52.018066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:09.267 [2024-11-15 11:20:52.018140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.267 [2024-11-15 11:20:52.018187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:09.268 [2024-11-15 11:20:52.018211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.268 [2024-11-15 11:20:52.021086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.268 [2024-11-15 11:20:52.021146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:09.268 BaseBdev1 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
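For readability, the per-base-bdev stack the trace above assembles (malloc backing bdev, error-injection bdev, passthru bdev, then the raid1 on top) can be sketched as a stand-alone script. This is a minimal sketch: `rpc_cmd` is stubbed here to print its arguments, whereas the real suite dispatches it over the SPDK RPC socket; the bdev names and RPC arguments are the ones visible in the trace.

```shell
#!/usr/bin/env bash
# Stub: the real rpc_cmd talks to /var/tmp/spdk.sock via scripts/rpc.py.
rpc_cmd() { echo "rpc: $*"; }

for i in 1 2; do
    # 1. Backing malloc bdev: 32 MiB, 512-byte blocks.
    rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    # 2. Error bdev wrapping it (named EE_<base>), so I/O errors can be
    #    injected later with bdev_error_inject_error.
    rpc_cmd bdev_error_create "BaseBdev${i}_malloc"
    # 3. Passthru bdev on top, giving the raid a stable member name.
    rpc_cmd bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
# Assemble the two passthru bdevs into a raid1 with a superblock (-s).
rpc_cmd bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
```

With this layering, failing a leaf of the stack exercises the raid's error path without touching the raid bdev itself.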
00:09:09.268 BaseBdev2_malloc 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.268 true 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.268 [2024-11-15 11:20:52.078874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:09.268 [2024-11-15 11:20:52.078936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.268 [2024-11-15 11:20:52.078959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:09.268 [2024-11-15 11:20:52.078974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.268 [2024-11-15 11:20:52.081665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.268 [2024-11-15 11:20:52.081706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:09.268 BaseBdev2 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:09.268 11:20:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.268 [2024-11-15 11:20:52.086938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.268 [2024-11-15 11:20:52.089358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.268 [2024-11-15 11:20:52.089582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:09.268 [2024-11-15 11:20:52.089603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:09.268 [2024-11-15 11:20:52.089839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:09.268 [2024-11-15 11:20:52.090094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:09.268 [2024-11-15 11:20:52.090109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:09.268 [2024-11-15 11:20:52.090363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.268 "name": "raid_bdev1", 00:09:09.268 "uuid": "0028a35f-4259-468f-b2a4-56ed44b9c0ba", 00:09:09.268 "strip_size_kb": 0, 00:09:09.268 "state": "online", 00:09:09.268 "raid_level": "raid1", 00:09:09.268 "superblock": true, 00:09:09.268 "num_base_bdevs": 2, 00:09:09.268 "num_base_bdevs_discovered": 2, 00:09:09.268 "num_base_bdevs_operational": 2, 00:09:09.268 "base_bdevs_list": [ 00:09:09.268 { 00:09:09.268 "name": "BaseBdev1", 00:09:09.268 "uuid": "150c65ca-f5c6-50d6-a1a4-2fbfa3093f56", 00:09:09.268 "is_configured": true, 00:09:09.268 "data_offset": 2048, 00:09:09.268 "data_size": 63488 00:09:09.268 }, 00:09:09.268 { 00:09:09.268 "name": "BaseBdev2", 00:09:09.268 "uuid": "d6c20d6e-f377-5d96-b670-d96ecccdf1df", 00:09:09.268 "is_configured": true, 00:09:09.268 "data_offset": 2048, 00:09:09.268 "data_size": 63488 00:09:09.268 } 00:09:09.268 ] 00:09:09.268 }' 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.268 11:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.836 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:09.836 11:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:09.836 [2024-11-15 11:20:52.732634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.780 11:20:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.780 "name": "raid_bdev1", 00:09:10.780 "uuid": "0028a35f-4259-468f-b2a4-56ed44b9c0ba", 00:09:10.780 "strip_size_kb": 0, 00:09:10.780 "state": "online", 00:09:10.780 "raid_level": "raid1", 00:09:10.780 "superblock": true, 00:09:10.780 "num_base_bdevs": 2, 00:09:10.780 "num_base_bdevs_discovered": 2, 00:09:10.780 "num_base_bdevs_operational": 2, 00:09:10.780 "base_bdevs_list": [ 00:09:10.780 { 00:09:10.780 "name": "BaseBdev1", 00:09:10.780 "uuid": "150c65ca-f5c6-50d6-a1a4-2fbfa3093f56", 00:09:10.780 "is_configured": true, 00:09:10.780 "data_offset": 2048, 00:09:10.780 "data_size": 63488 00:09:10.780 }, 00:09:10.780 { 00:09:10.780 "name": "BaseBdev2", 00:09:10.780 "uuid": "d6c20d6e-f377-5d96-b670-d96ecccdf1df", 00:09:10.780 "is_configured": true, 00:09:10.780 "data_offset": 2048, 00:09:10.780 "data_size": 63488 
00:09:10.780 } 00:09:10.780 ] 00:09:10.780 }' 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.780 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.377 [2024-11-15 11:20:54.167942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 [2024-11-15 11:20:54.168153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline [2024-11-15 11:20:54.171870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct [2024-11-15 11:20:54.172141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb [2024-11-15 11:20:54.172433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct [2024-11-15 11:20:54.172606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:11.377 { 00:09:11.377 "results": [ 00:09:11.377 { 00:09:11.377 "job": "raid_bdev1", 00:09:11.377 "core_mask": "0x1", 00:09:11.377 "workload": "randrw", 00:09:11.377 "percentage": 50, 00:09:11.377 "status": "finished", 00:09:11.377 "queue_depth": 1, 00:09:11.377 "io_size": 131072, 00:09:11.377 "runtime": 1.432697, 00:09:11.377 "iops": 12119.101247507324, 00:09:11.377 "mibps": 1514.8876559384155, 00:09:11.377 "io_failed": 0, 00:09:11.377 "io_timeout": 0, 00:09:11.377 "avg_latency_us": 78.17430125711414, 00:09:11.377 "min_latency_us": 38.167272727272724, 00:09:11.377 "max_latency_us": 1995.8690909090908 00:09:11.377 } 00:09:11.377 ], 00:09:11.377 "core_count": 1
00:09:11.377 } 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63379 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63379 ']' 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63379 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63379 killing process with pid 63379 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63379' 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63379 00:09:11.377 [2024-11-15 11:20:54.210649] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.377 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63379 00:09:11.635 [2024-11-15 11:20:54.331871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.569 11:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.059drigEyR 00:09:12.569 11:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:12.569 11:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:12.569 11:20:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:12.569 ************************************ 00:09:12.569 END TEST raid_read_error_test 00:09:12.569 ************************************ 00:09:12.569 11:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:12.569 11:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.569 11:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:12.569 11:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:12.569 00:09:12.569 real 0m4.541s 00:09:12.569 user 0m5.615s 00:09:12.569 sys 0m0.616s 00:09:12.569 11:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:12.569 11:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.569 11:20:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:12.569 11:20:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:12.569 11:20:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:12.569 11:20:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.569 ************************************ 00:09:12.569 START TEST raid_write_error_test 00:09:12.569 ************************************ 00:09:12.569 11:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:09:12.569 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:12.569 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:12.569 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:12.569 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:12.569 11:20:55 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.570 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:12.570 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.570 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.570 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:12.570 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.570 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hvPG4Qf1gN 00:09:12.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63525 00:09:12.827 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63525 00:09:12.828 11:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63525 ']' 00:09:12.828 11:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:12.828 11:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.828 11:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:12.828 11:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.828 11:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:12.828 11:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.828 [2024-11-15 11:20:55.636528] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
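Like the read test above, the write test will finish by pulling the failures-per-second column for `raid_bdev1` out of the bdevperf log (`grep -v Job | grep raid_bdev1 | awk '{print $6}'` at `bdev_raid.sh@845`) and requiring it to be `0.00`. A self-contained sketch of that pipeline follows; note the sample log line is made up for illustration, so treating field 6 as `fail_per_s` here is an assumption about bdevperf's column layout, not taken from this log.

```shell
#!/usr/bin/env bash
# Reproduce the fail_per_s extraction against a fake bdevperf log.
# NOTE: the two log lines below are hypothetical; only the
# grep/grep/awk pipeline itself mirrors bdev_raid.sh@845.
bdevperf_log=$(mktemp)
cat > "$bdevperf_log" <<'EOF'
Job: raid_bdev1 ended in about 1.43 seconds with error
raid_bdev1 12119.10 1514.89 0 0 0.00 78.17
EOF
# Drop the "Job" summary line, keep the per-bdev row, take column 6.
fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
rm -f "$bdevperf_log"
[[ $fail_per_s = 0.00 ]] && echo "no failed I/O: $fail_per_s"
```

A non-zero value here would mean injected errors leaked through the raid1 redundancy to the application, which is exactly what the test asserts cannot happen.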
00:09:12.828 [2024-11-15 11:20:55.636732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63525 ] 00:09:13.086 [2024-11-15 11:20:55.828157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.086 [2024-11-15 11:20:56.002215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.345 [2024-11-15 11:20:56.235489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.345 [2024-11-15 11:20:56.235568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.912 BaseBdev1_malloc 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.912 true 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.912 [2024-11-15 11:20:56.690608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:13.912 [2024-11-15 11:20:56.690692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.912 [2024-11-15 11:20:56.690721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:13.912 [2024-11-15 11:20:56.690737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.912 [2024-11-15 11:20:56.693707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.912 [2024-11-15 11:20:56.693770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:13.912 BaseBdev1 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.912 BaseBdev2_malloc 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:13.912 11:20:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.912 true 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.912 [2024-11-15 11:20:56.754235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:13.912 [2024-11-15 11:20:56.754321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.912 [2024-11-15 11:20:56.754347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:13.912 [2024-11-15 11:20:56.754365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.912 [2024-11-15 11:20:56.757283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.912 [2024-11-15 11:20:56.757347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:13.912 BaseBdev2 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.912 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.912 [2024-11-15 11:20:56.762301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:13.912 [2024-11-15 11:20:56.764789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.913 [2024-11-15 11:20:56.765027] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:13.913 [2024-11-15 11:20:56.765049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:13.913 [2024-11-15 11:20:56.765366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:13.913 [2024-11-15 11:20:56.765660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:13.913 [2024-11-15 11:20:56.765685] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:13.913 [2024-11-15 11:20:56.765875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.913 "name": "raid_bdev1", 00:09:13.913 "uuid": "5ead97b6-f4a4-46ef-89de-427fb70b27ad", 00:09:13.913 "strip_size_kb": 0, 00:09:13.913 "state": "online", 00:09:13.913 "raid_level": "raid1", 00:09:13.913 "superblock": true, 00:09:13.913 "num_base_bdevs": 2, 00:09:13.913 "num_base_bdevs_discovered": 2, 00:09:13.913 "num_base_bdevs_operational": 2, 00:09:13.913 "base_bdevs_list": [ 00:09:13.913 { 00:09:13.913 "name": "BaseBdev1", 00:09:13.913 "uuid": "aa60b056-7be2-563b-a25b-01b70303256d", 00:09:13.913 "is_configured": true, 00:09:13.913 "data_offset": 2048, 00:09:13.913 "data_size": 63488 00:09:13.913 }, 00:09:13.913 { 00:09:13.913 "name": "BaseBdev2", 00:09:13.913 "uuid": "fadf7032-f1c9-5781-879d-c1c3c4efce99", 00:09:13.913 "is_configured": true, 00:09:13.913 "data_offset": 2048, 00:09:13.913 "data_size": 63488 00:09:13.913 } 00:09:13.913 ] 00:09:13.913 }' 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.913 11:20:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.480 11:20:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:14.480 11:20:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:14.739 [2024-11-15 11:20:57.432262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:15.675 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:15.675 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.675 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.675 [2024-11-15 11:20:58.294516] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:15.675 [2024-11-15 11:20:58.294744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.675 [2024-11-15 11:20:58.294999] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:09:15.675 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.675 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:15.675 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:15.675 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:15.675 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.676 "name": "raid_bdev1", 00:09:15.676 "uuid": "5ead97b6-f4a4-46ef-89de-427fb70b27ad", 00:09:15.676 "strip_size_kb": 0, 00:09:15.676 "state": "online", 00:09:15.676 "raid_level": "raid1", 00:09:15.676 "superblock": true, 00:09:15.676 "num_base_bdevs": 2, 00:09:15.676 "num_base_bdevs_discovered": 1, 00:09:15.676 "num_base_bdevs_operational": 1, 00:09:15.676 "base_bdevs_list": [ 00:09:15.676 { 00:09:15.676 "name": null, 00:09:15.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.676 "is_configured": false, 00:09:15.676 "data_offset": 0, 00:09:15.676 "data_size": 63488 00:09:15.676 }, 00:09:15.676 { 00:09:15.676 "name": 
"BaseBdev2", 00:09:15.676 "uuid": "fadf7032-f1c9-5781-879d-c1c3c4efce99", 00:09:15.676 "is_configured": true, 00:09:15.676 "data_offset": 2048, 00:09:15.676 "data_size": 63488 00:09:15.676 } 00:09:15.676 ] 00:09:15.676 }' 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.676 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.935 [2024-11-15 11:20:58.821398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.935 [2024-11-15 11:20:58.821438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.935 [2024-11-15 11:20:58.824810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.935 [2024-11-15 11:20:58.824865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.935 [2024-11-15 11:20:58.824943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.935 [2024-11-15 11:20:58.824960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:15.935 { 00:09:15.935 "results": [ 00:09:15.935 { 00:09:15.935 "job": "raid_bdev1", 00:09:15.935 "core_mask": "0x1", 00:09:15.935 "workload": "randrw", 00:09:15.935 "percentage": 50, 00:09:15.935 "status": "finished", 00:09:15.935 "queue_depth": 1, 00:09:15.935 "io_size": 131072, 00:09:15.935 "runtime": 1.38633, 00:09:15.935 "iops": 13991.618157292996, 00:09:15.935 "mibps": 1748.9522696616245, 00:09:15.935 "io_failed": 0, 00:09:15.935 "io_timeout": 0, 
00:09:15.935 "avg_latency_us": 67.08096978445589, 00:09:15.935 "min_latency_us": 37.70181818181818, 00:09:15.935 "max_latency_us": 1623.5054545454545 00:09:15.935 } 00:09:15.935 ], 00:09:15.935 "core_count": 1 00:09:15.935 } 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63525 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63525 ']' 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63525 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63525 00:09:15.935 killing process with pid 63525 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63525' 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63525 00:09:15.935 11:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63525 00:09:15.935 [2024-11-15 11:20:58.864481] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.194 [2024-11-15 11:20:58.991688] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.571 11:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hvPG4Qf1gN 00:09:17.571 11:21:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:17.571 11:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:17.571 11:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:17.571 11:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:17.571 ************************************ 00:09:17.571 END TEST raid_write_error_test 00:09:17.571 ************************************ 00:09:17.571 11:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.571 11:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:17.571 11:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:17.571 00:09:17.571 real 0m4.587s 00:09:17.571 user 0m5.715s 00:09:17.571 sys 0m0.639s 00:09:17.571 11:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:17.571 11:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.571 11:21:00 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:17.571 11:21:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:17.571 11:21:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:17.571 11:21:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:17.571 11:21:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:17.571 11:21:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.571 ************************************ 00:09:17.571 START TEST raid_state_function_test 00:09:17.571 ************************************ 00:09:17.571 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:09:17.571 11:21:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:17.571 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:17.571 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:17.571 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:17.571 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:17.571 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:17.572 
11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:17.572 Process raid pid: 63663 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63663 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63663' 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63663 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 63663 ']' 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:17.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:17.572 11:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.572 [2024-11-15 11:21:00.279645] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:09:17.572 [2024-11-15 11:21:00.279842] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.572 [2024-11-15 11:21:00.473259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.830 [2024-11-15 11:21:00.668407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.088 [2024-11-15 11:21:00.935792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.088 [2024-11-15 11:21:00.935855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.660 [2024-11-15 11:21:01.367051] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.660 [2024-11-15 11:21:01.367302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.660 [2024-11-15 11:21:01.367332] 
bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.660 [2024-11-15 11:21:01.367350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.660 [2024-11-15 11:21:01.367360] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.660 [2024-11-15 11:21:01.367375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.660 11:21:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.660 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.660 "name": "Existed_Raid", 00:09:18.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.660 "strip_size_kb": 64, 00:09:18.660 "state": "configuring", 00:09:18.660 "raid_level": "raid0", 00:09:18.660 "superblock": false, 00:09:18.660 "num_base_bdevs": 3, 00:09:18.660 "num_base_bdevs_discovered": 0, 00:09:18.660 "num_base_bdevs_operational": 3, 00:09:18.660 "base_bdevs_list": [ 00:09:18.660 { 00:09:18.660 "name": "BaseBdev1", 00:09:18.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.660 "is_configured": false, 00:09:18.660 "data_offset": 0, 00:09:18.660 "data_size": 0 00:09:18.660 }, 00:09:18.660 { 00:09:18.660 "name": "BaseBdev2", 00:09:18.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.661 "is_configured": false, 00:09:18.661 "data_offset": 0, 00:09:18.661 "data_size": 0 00:09:18.661 }, 00:09:18.661 { 00:09:18.661 "name": "BaseBdev3", 00:09:18.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.661 "is_configured": false, 00:09:18.661 "data_offset": 0, 00:09:18.661 "data_size": 0 00:09:18.661 } 00:09:18.661 ] 00:09:18.661 }' 00:09:18.661 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.661 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.230 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.230 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.230 11:21:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.230 [2024-11-15 11:21:01.895152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.230 [2024-11-15 11:21:01.895425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:19.230 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.230 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.230 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.230 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.230 [2024-11-15 11:21:01.907111] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.230 [2024-11-15 11:21:01.907365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.230 [2024-11-15 11:21:01.907392] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.230 [2024-11-15 11:21:01.907414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.230 [2024-11-15 11:21:01.907424] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:19.230 [2024-11-15 11:21:01.907438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.230 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.230 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:19.230 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:19.230 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.230 [2024-11-15 11:21:01.951390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.230 BaseBdev1 00:09:19.230 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 [ 00:09:19.231 { 00:09:19.231 "name": "BaseBdev1", 00:09:19.231 "aliases": [ 00:09:19.231 "554e09b7-d4f8-4fb6-a323-a90fe9382124" 00:09:19.231 ], 00:09:19.231 
"product_name": "Malloc disk", 00:09:19.231 "block_size": 512, 00:09:19.231 "num_blocks": 65536, 00:09:19.231 "uuid": "554e09b7-d4f8-4fb6-a323-a90fe9382124", 00:09:19.231 "assigned_rate_limits": { 00:09:19.231 "rw_ios_per_sec": 0, 00:09:19.231 "rw_mbytes_per_sec": 0, 00:09:19.231 "r_mbytes_per_sec": 0, 00:09:19.231 "w_mbytes_per_sec": 0 00:09:19.231 }, 00:09:19.231 "claimed": true, 00:09:19.231 "claim_type": "exclusive_write", 00:09:19.231 "zoned": false, 00:09:19.231 "supported_io_types": { 00:09:19.231 "read": true, 00:09:19.231 "write": true, 00:09:19.231 "unmap": true, 00:09:19.231 "flush": true, 00:09:19.231 "reset": true, 00:09:19.231 "nvme_admin": false, 00:09:19.231 "nvme_io": false, 00:09:19.231 "nvme_io_md": false, 00:09:19.231 "write_zeroes": true, 00:09:19.231 "zcopy": true, 00:09:19.231 "get_zone_info": false, 00:09:19.231 "zone_management": false, 00:09:19.231 "zone_append": false, 00:09:19.231 "compare": false, 00:09:19.231 "compare_and_write": false, 00:09:19.231 "abort": true, 00:09:19.231 "seek_hole": false, 00:09:19.231 "seek_data": false, 00:09:19.231 "copy": true, 00:09:19.231 "nvme_iov_md": false 00:09:19.231 }, 00:09:19.231 "memory_domains": [ 00:09:19.231 { 00:09:19.231 "dma_device_id": "system", 00:09:19.231 "dma_device_type": 1 00:09:19.231 }, 00:09:19.231 { 00:09:19.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.231 "dma_device_type": 2 00:09:19.231 } 00:09:19.231 ], 00:09:19.231 "driver_specific": {} 00:09:19.231 } 00:09:19.231 ] 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.231 11:21:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.231 11:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 11:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.231 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.231 "name": "Existed_Raid", 00:09:19.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.231 "strip_size_kb": 64, 00:09:19.231 "state": "configuring", 00:09:19.231 "raid_level": "raid0", 00:09:19.231 "superblock": false, 00:09:19.231 "num_base_bdevs": 3, 00:09:19.231 "num_base_bdevs_discovered": 1, 00:09:19.231 "num_base_bdevs_operational": 3, 00:09:19.231 "base_bdevs_list": [ 00:09:19.231 { 00:09:19.231 "name": "BaseBdev1", 
00:09:19.231 "uuid": "554e09b7-d4f8-4fb6-a323-a90fe9382124", 00:09:19.231 "is_configured": true, 00:09:19.231 "data_offset": 0, 00:09:19.231 "data_size": 65536 00:09:19.231 }, 00:09:19.231 { 00:09:19.231 "name": "BaseBdev2", 00:09:19.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.231 "is_configured": false, 00:09:19.231 "data_offset": 0, 00:09:19.231 "data_size": 0 00:09:19.231 }, 00:09:19.231 { 00:09:19.231 "name": "BaseBdev3", 00:09:19.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.231 "is_configured": false, 00:09:19.231 "data_offset": 0, 00:09:19.231 "data_size": 0 00:09:19.231 } 00:09:19.231 ] 00:09:19.231 }' 00:09:19.231 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.231 11:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.799 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.799 11:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.799 11:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.799 [2024-11-15 11:21:02.503645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.799 [2024-11-15 11:21:02.503710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:19.799 11:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.799 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.799 11:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.799 11:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.799 [2024-11-15 
11:21:02.515672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.799 [2024-11-15 11:21:02.518628] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.800 [2024-11-15 11:21:02.518861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.800 [2024-11-15 11:21:02.518982] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:19.800 [2024-11-15 11:21:02.519040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.800 "name": "Existed_Raid", 00:09:19.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.800 "strip_size_kb": 64, 00:09:19.800 "state": "configuring", 00:09:19.800 "raid_level": "raid0", 00:09:19.800 "superblock": false, 00:09:19.800 "num_base_bdevs": 3, 00:09:19.800 "num_base_bdevs_discovered": 1, 00:09:19.800 "num_base_bdevs_operational": 3, 00:09:19.800 "base_bdevs_list": [ 00:09:19.800 { 00:09:19.800 "name": "BaseBdev1", 00:09:19.800 "uuid": "554e09b7-d4f8-4fb6-a323-a90fe9382124", 00:09:19.800 "is_configured": true, 00:09:19.800 "data_offset": 0, 00:09:19.800 "data_size": 65536 00:09:19.800 }, 00:09:19.800 { 00:09:19.800 "name": "BaseBdev2", 00:09:19.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.800 "is_configured": false, 00:09:19.800 "data_offset": 0, 00:09:19.800 "data_size": 0 00:09:19.800 }, 00:09:19.800 { 00:09:19.800 "name": "BaseBdev3", 00:09:19.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.800 "is_configured": false, 00:09:19.800 "data_offset": 0, 00:09:19.800 "data_size": 0 00:09:19.800 } 00:09:19.800 ] 00:09:19.800 }' 00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:19.800 11:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.367 [2024-11-15 11:21:03.073498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.367 BaseBdev2 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:20.367 11:21:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.367 [ 00:09:20.367 { 00:09:20.367 "name": "BaseBdev2", 00:09:20.367 "aliases": [ 00:09:20.367 "b9da1073-6508-446e-ae18-278ef778bcce" 00:09:20.367 ], 00:09:20.367 "product_name": "Malloc disk", 00:09:20.367 "block_size": 512, 00:09:20.367 "num_blocks": 65536, 00:09:20.367 "uuid": "b9da1073-6508-446e-ae18-278ef778bcce", 00:09:20.367 "assigned_rate_limits": { 00:09:20.367 "rw_ios_per_sec": 0, 00:09:20.367 "rw_mbytes_per_sec": 0, 00:09:20.367 "r_mbytes_per_sec": 0, 00:09:20.367 "w_mbytes_per_sec": 0 00:09:20.367 }, 00:09:20.367 "claimed": true, 00:09:20.367 "claim_type": "exclusive_write", 00:09:20.367 "zoned": false, 00:09:20.367 "supported_io_types": { 00:09:20.367 "read": true, 00:09:20.367 "write": true, 00:09:20.367 "unmap": true, 00:09:20.367 "flush": true, 00:09:20.367 "reset": true, 00:09:20.367 "nvme_admin": false, 00:09:20.367 "nvme_io": false, 00:09:20.367 "nvme_io_md": false, 00:09:20.367 "write_zeroes": true, 00:09:20.367 "zcopy": true, 00:09:20.367 "get_zone_info": false, 00:09:20.367 "zone_management": false, 00:09:20.367 "zone_append": false, 00:09:20.367 "compare": false, 00:09:20.367 "compare_and_write": false, 00:09:20.367 "abort": true, 00:09:20.367 "seek_hole": false, 00:09:20.367 "seek_data": false, 00:09:20.367 "copy": true, 00:09:20.367 "nvme_iov_md": false 00:09:20.367 }, 00:09:20.367 "memory_domains": [ 00:09:20.367 { 00:09:20.367 "dma_device_id": "system", 00:09:20.367 "dma_device_type": 1 00:09:20.367 }, 00:09:20.367 { 00:09:20.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.367 "dma_device_type": 2 00:09:20.367 } 00:09:20.367 ], 00:09:20.367 "driver_specific": {} 00:09:20.367 } 00:09:20.367 ] 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.367 11:21:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.367 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.368 "name": "Existed_Raid", 00:09:20.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.368 "strip_size_kb": 64, 00:09:20.368 "state": "configuring", 00:09:20.368 "raid_level": "raid0", 00:09:20.368 "superblock": false, 00:09:20.368 "num_base_bdevs": 3, 00:09:20.368 "num_base_bdevs_discovered": 2, 00:09:20.368 "num_base_bdevs_operational": 3, 00:09:20.368 "base_bdevs_list": [ 00:09:20.368 { 00:09:20.368 "name": "BaseBdev1", 00:09:20.368 "uuid": "554e09b7-d4f8-4fb6-a323-a90fe9382124", 00:09:20.368 "is_configured": true, 00:09:20.368 "data_offset": 0, 00:09:20.368 "data_size": 65536 00:09:20.368 }, 00:09:20.368 { 00:09:20.368 "name": "BaseBdev2", 00:09:20.368 "uuid": "b9da1073-6508-446e-ae18-278ef778bcce", 00:09:20.368 "is_configured": true, 00:09:20.368 "data_offset": 0, 00:09:20.368 "data_size": 65536 00:09:20.368 }, 00:09:20.368 { 00:09:20.368 "name": "BaseBdev3", 00:09:20.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.368 "is_configured": false, 00:09:20.368 "data_offset": 0, 00:09:20.368 "data_size": 0 00:09:20.368 } 00:09:20.368 ] 00:09:20.368 }' 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.368 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.935 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:20.935 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.935 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.935 [2024-11-15 11:21:03.713970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.935 [2024-11-15 11:21:03.714029] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:20.935 [2024-11-15 11:21:03.714050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:20.936 [2024-11-15 11:21:03.714461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:20.936 [2024-11-15 11:21:03.714715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:20.936 [2024-11-15 11:21:03.714732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:20.936 [2024-11-15 11:21:03.715064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.936 BaseBdev3 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.936 
11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.936 [ 00:09:20.936 { 00:09:20.936 "name": "BaseBdev3", 00:09:20.936 "aliases": [ 00:09:20.936 "d3d854a6-5cb6-4dab-8d58-8192bc23ebe4" 00:09:20.936 ], 00:09:20.936 "product_name": "Malloc disk", 00:09:20.936 "block_size": 512, 00:09:20.936 "num_blocks": 65536, 00:09:20.936 "uuid": "d3d854a6-5cb6-4dab-8d58-8192bc23ebe4", 00:09:20.936 "assigned_rate_limits": { 00:09:20.936 "rw_ios_per_sec": 0, 00:09:20.936 "rw_mbytes_per_sec": 0, 00:09:20.936 "r_mbytes_per_sec": 0, 00:09:20.936 "w_mbytes_per_sec": 0 00:09:20.936 }, 00:09:20.936 "claimed": true, 00:09:20.936 "claim_type": "exclusive_write", 00:09:20.936 "zoned": false, 00:09:20.936 "supported_io_types": { 00:09:20.936 "read": true, 00:09:20.936 "write": true, 00:09:20.936 "unmap": true, 00:09:20.936 "flush": true, 00:09:20.936 "reset": true, 00:09:20.936 "nvme_admin": false, 00:09:20.936 "nvme_io": false, 00:09:20.936 "nvme_io_md": false, 00:09:20.936 "write_zeroes": true, 00:09:20.936 "zcopy": true, 00:09:20.936 "get_zone_info": false, 00:09:20.936 "zone_management": false, 00:09:20.936 "zone_append": false, 00:09:20.936 "compare": false, 00:09:20.936 "compare_and_write": false, 00:09:20.936 "abort": true, 00:09:20.936 "seek_hole": false, 00:09:20.936 "seek_data": false, 00:09:20.936 "copy": true, 00:09:20.936 "nvme_iov_md": false 00:09:20.936 }, 00:09:20.936 "memory_domains": [ 00:09:20.936 { 00:09:20.936 "dma_device_id": "system", 00:09:20.936 "dma_device_type": 1 00:09:20.936 }, 00:09:20.936 { 00:09:20.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.936 "dma_device_type": 2 00:09:20.936 } 00:09:20.936 ], 00:09:20.936 "driver_specific": {} 00:09:20.936 } 00:09:20.936 ] 
00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.936 "name": "Existed_Raid", 00:09:20.936 "uuid": "35d32e20-0216-4620-aefe-ed381639b3df", 00:09:20.936 "strip_size_kb": 64, 00:09:20.936 "state": "online", 00:09:20.936 "raid_level": "raid0", 00:09:20.936 "superblock": false, 00:09:20.936 "num_base_bdevs": 3, 00:09:20.936 "num_base_bdevs_discovered": 3, 00:09:20.936 "num_base_bdevs_operational": 3, 00:09:20.936 "base_bdevs_list": [ 00:09:20.936 { 00:09:20.936 "name": "BaseBdev1", 00:09:20.936 "uuid": "554e09b7-d4f8-4fb6-a323-a90fe9382124", 00:09:20.936 "is_configured": true, 00:09:20.936 "data_offset": 0, 00:09:20.936 "data_size": 65536 00:09:20.936 }, 00:09:20.936 { 00:09:20.936 "name": "BaseBdev2", 00:09:20.936 "uuid": "b9da1073-6508-446e-ae18-278ef778bcce", 00:09:20.936 "is_configured": true, 00:09:20.936 "data_offset": 0, 00:09:20.936 "data_size": 65536 00:09:20.936 }, 00:09:20.936 { 00:09:20.936 "name": "BaseBdev3", 00:09:20.936 "uuid": "d3d854a6-5cb6-4dab-8d58-8192bc23ebe4", 00:09:20.936 "is_configured": true, 00:09:20.936 "data_offset": 0, 00:09:20.936 "data_size": 65536 00:09:20.936 } 00:09:20.936 ] 00:09:20.936 }' 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.936 11:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.503 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.503 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.503 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.503 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:21.503 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.503 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.503 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.503 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.503 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.503 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.503 [2024-11-15 11:21:04.290686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.503 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.503 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.503 "name": "Existed_Raid", 00:09:21.503 "aliases": [ 00:09:21.503 "35d32e20-0216-4620-aefe-ed381639b3df" 00:09:21.503 ], 00:09:21.503 "product_name": "Raid Volume", 00:09:21.503 "block_size": 512, 00:09:21.503 "num_blocks": 196608, 00:09:21.503 "uuid": "35d32e20-0216-4620-aefe-ed381639b3df", 00:09:21.503 "assigned_rate_limits": { 00:09:21.503 "rw_ios_per_sec": 0, 00:09:21.503 "rw_mbytes_per_sec": 0, 00:09:21.503 "r_mbytes_per_sec": 0, 00:09:21.503 "w_mbytes_per_sec": 0 00:09:21.503 }, 00:09:21.503 "claimed": false, 00:09:21.503 "zoned": false, 00:09:21.503 "supported_io_types": { 00:09:21.503 "read": true, 00:09:21.503 "write": true, 00:09:21.503 "unmap": true, 00:09:21.503 "flush": true, 00:09:21.503 "reset": true, 00:09:21.503 "nvme_admin": false, 00:09:21.503 "nvme_io": false, 00:09:21.503 "nvme_io_md": false, 00:09:21.503 "write_zeroes": true, 00:09:21.503 "zcopy": false, 00:09:21.503 "get_zone_info": false, 00:09:21.503 "zone_management": false, 00:09:21.503 
"zone_append": false, 00:09:21.503 "compare": false, 00:09:21.503 "compare_and_write": false, 00:09:21.503 "abort": false, 00:09:21.503 "seek_hole": false, 00:09:21.503 "seek_data": false, 00:09:21.503 "copy": false, 00:09:21.503 "nvme_iov_md": false 00:09:21.503 }, 00:09:21.503 "memory_domains": [ 00:09:21.503 { 00:09:21.503 "dma_device_id": "system", 00:09:21.503 "dma_device_type": 1 00:09:21.503 }, 00:09:21.503 { 00:09:21.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.503 "dma_device_type": 2 00:09:21.503 }, 00:09:21.503 { 00:09:21.503 "dma_device_id": "system", 00:09:21.503 "dma_device_type": 1 00:09:21.504 }, 00:09:21.504 { 00:09:21.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.504 "dma_device_type": 2 00:09:21.504 }, 00:09:21.504 { 00:09:21.504 "dma_device_id": "system", 00:09:21.504 "dma_device_type": 1 00:09:21.504 }, 00:09:21.504 { 00:09:21.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.504 "dma_device_type": 2 00:09:21.504 } 00:09:21.504 ], 00:09:21.504 "driver_specific": { 00:09:21.504 "raid": { 00:09:21.504 "uuid": "35d32e20-0216-4620-aefe-ed381639b3df", 00:09:21.504 "strip_size_kb": 64, 00:09:21.504 "state": "online", 00:09:21.504 "raid_level": "raid0", 00:09:21.504 "superblock": false, 00:09:21.504 "num_base_bdevs": 3, 00:09:21.504 "num_base_bdevs_discovered": 3, 00:09:21.504 "num_base_bdevs_operational": 3, 00:09:21.504 "base_bdevs_list": [ 00:09:21.504 { 00:09:21.504 "name": "BaseBdev1", 00:09:21.504 "uuid": "554e09b7-d4f8-4fb6-a323-a90fe9382124", 00:09:21.504 "is_configured": true, 00:09:21.504 "data_offset": 0, 00:09:21.504 "data_size": 65536 00:09:21.504 }, 00:09:21.504 { 00:09:21.504 "name": "BaseBdev2", 00:09:21.504 "uuid": "b9da1073-6508-446e-ae18-278ef778bcce", 00:09:21.504 "is_configured": true, 00:09:21.504 "data_offset": 0, 00:09:21.504 "data_size": 65536 00:09:21.504 }, 00:09:21.504 { 00:09:21.504 "name": "BaseBdev3", 00:09:21.504 "uuid": "d3d854a6-5cb6-4dab-8d58-8192bc23ebe4", 00:09:21.504 "is_configured": true, 
00:09:21.504 "data_offset": 0, 00:09:21.504 "data_size": 65536 00:09:21.504 } 00:09:21.504 ] 00:09:21.504 } 00:09:21.504 } 00:09:21.504 }' 00:09:21.504 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.504 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:21.504 BaseBdev2 00:09:21.504 BaseBdev3' 00:09:21.504 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.504 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.504 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.763 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.763 [2024-11-15 11:21:04.622409] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:21.763 [2024-11-15 11:21:04.622647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.763 [2024-11-15 11:21:04.622745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.020 "name": "Existed_Raid", 00:09:22.020 "uuid": "35d32e20-0216-4620-aefe-ed381639b3df", 00:09:22.020 "strip_size_kb": 64, 00:09:22.020 "state": "offline", 00:09:22.020 "raid_level": "raid0", 00:09:22.020 "superblock": false, 00:09:22.020 "num_base_bdevs": 3, 00:09:22.020 "num_base_bdevs_discovered": 2, 00:09:22.020 "num_base_bdevs_operational": 2, 00:09:22.020 "base_bdevs_list": [ 00:09:22.020 { 00:09:22.020 "name": null, 00:09:22.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.020 "is_configured": false, 00:09:22.020 "data_offset": 0, 00:09:22.020 "data_size": 65536 00:09:22.020 }, 00:09:22.020 { 00:09:22.020 "name": "BaseBdev2", 00:09:22.020 "uuid": "b9da1073-6508-446e-ae18-278ef778bcce", 00:09:22.020 "is_configured": true, 00:09:22.020 "data_offset": 0, 00:09:22.020 "data_size": 65536 00:09:22.020 }, 00:09:22.020 { 00:09:22.020 "name": "BaseBdev3", 00:09:22.020 "uuid": "d3d854a6-5cb6-4dab-8d58-8192bc23ebe4", 00:09:22.020 "is_configured": true, 00:09:22.020 "data_offset": 0, 00:09:22.020 "data_size": 65536 00:09:22.020 } 00:09:22.020 ] 00:09:22.020 }' 00:09:22.020 11:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.020 11:21:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.587 [2024-11-15 11:21:05.296602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.587 11:21:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.587 [2024-11-15 11:21:05.440097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.587 [2024-11-15 11:21:05.440163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:22.587 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.847 BaseBdev2 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.847 [ 00:09:22.847 { 00:09:22.847 "name": "BaseBdev2", 00:09:22.847 "aliases": [ 00:09:22.847 "31e949d1-9a73-4a8c-ac69-710095bfbbf9" 00:09:22.847 ], 00:09:22.847 "product_name": "Malloc disk", 00:09:22.847 "block_size": 512, 00:09:22.847 "num_blocks": 65536, 00:09:22.847 "uuid": "31e949d1-9a73-4a8c-ac69-710095bfbbf9", 00:09:22.847 "assigned_rate_limits": { 00:09:22.847 "rw_ios_per_sec": 0, 00:09:22.847 "rw_mbytes_per_sec": 0, 00:09:22.847 "r_mbytes_per_sec": 0, 00:09:22.847 "w_mbytes_per_sec": 0 00:09:22.847 }, 00:09:22.847 "claimed": false, 00:09:22.847 "zoned": false, 00:09:22.847 "supported_io_types": { 00:09:22.847 "read": true, 00:09:22.847 "write": true, 00:09:22.847 "unmap": true, 00:09:22.847 "flush": true, 00:09:22.847 "reset": true, 00:09:22.847 "nvme_admin": false, 00:09:22.847 "nvme_io": false, 00:09:22.847 "nvme_io_md": false, 00:09:22.847 "write_zeroes": true, 00:09:22.847 "zcopy": true, 00:09:22.847 "get_zone_info": false, 00:09:22.847 "zone_management": false, 00:09:22.847 "zone_append": false, 00:09:22.847 "compare": false, 00:09:22.847 "compare_and_write": false, 00:09:22.847 "abort": true, 00:09:22.847 "seek_hole": false, 00:09:22.847 "seek_data": false, 00:09:22.847 "copy": true, 00:09:22.847 "nvme_iov_md": false 00:09:22.847 }, 00:09:22.847 "memory_domains": [ 00:09:22.847 { 00:09:22.847 "dma_device_id": "system", 00:09:22.847 "dma_device_type": 1 00:09:22.847 }, 
00:09:22.847 { 00:09:22.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.847 "dma_device_type": 2 00:09:22.847 } 00:09:22.847 ], 00:09:22.847 "driver_specific": {} 00:09:22.847 } 00:09:22.847 ] 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.847 BaseBdev3 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:22.847 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.848 [ 00:09:22.848 { 00:09:22.848 "name": "BaseBdev3", 00:09:22.848 "aliases": [ 00:09:22.848 "2ecacbc8-6191-4e72-b7ac-e383c0cb1a68" 00:09:22.848 ], 00:09:22.848 "product_name": "Malloc disk", 00:09:22.848 "block_size": 512, 00:09:22.848 "num_blocks": 65536, 00:09:22.848 "uuid": "2ecacbc8-6191-4e72-b7ac-e383c0cb1a68", 00:09:22.848 "assigned_rate_limits": { 00:09:22.848 "rw_ios_per_sec": 0, 00:09:22.848 "rw_mbytes_per_sec": 0, 00:09:22.848 "r_mbytes_per_sec": 0, 00:09:22.848 "w_mbytes_per_sec": 0 00:09:22.848 }, 00:09:22.848 "claimed": false, 00:09:22.848 "zoned": false, 00:09:22.848 "supported_io_types": { 00:09:22.848 "read": true, 00:09:22.848 "write": true, 00:09:22.848 "unmap": true, 00:09:22.848 "flush": true, 00:09:22.848 "reset": true, 00:09:22.848 "nvme_admin": false, 00:09:22.848 "nvme_io": false, 00:09:22.848 "nvme_io_md": false, 00:09:22.848 "write_zeroes": true, 00:09:22.848 "zcopy": true, 00:09:22.848 "get_zone_info": false, 00:09:22.848 "zone_management": false, 00:09:22.848 "zone_append": false, 00:09:22.848 "compare": false, 00:09:22.848 "compare_and_write": false, 00:09:22.848 "abort": true, 00:09:22.848 "seek_hole": false, 00:09:22.848 "seek_data": false, 00:09:22.848 "copy": true, 00:09:22.848 "nvme_iov_md": false 00:09:22.848 }, 00:09:22.848 "memory_domains": [ 00:09:22.848 { 00:09:22.848 "dma_device_id": "system", 00:09:22.848 "dma_device_type": 1 00:09:22.848 }, 00:09:22.848 { 
00:09:22.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.848 "dma_device_type": 2 00:09:22.848 } 00:09:22.848 ], 00:09:22.848 "driver_specific": {} 00:09:22.848 } 00:09:22.848 ] 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.848 [2024-11-15 11:21:05.725227] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.848 [2024-11-15 11:21:05.725292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.848 [2024-11-15 11:21:05.725341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.848 [2024-11-15 11:21:05.727735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.848 "name": "Existed_Raid", 00:09:22.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.848 "strip_size_kb": 64, 00:09:22.848 "state": "configuring", 00:09:22.848 "raid_level": "raid0", 00:09:22.848 "superblock": false, 00:09:22.848 "num_base_bdevs": 3, 00:09:22.848 "num_base_bdevs_discovered": 2, 00:09:22.848 "num_base_bdevs_operational": 3, 00:09:22.848 "base_bdevs_list": [ 00:09:22.848 { 00:09:22.848 "name": "BaseBdev1", 00:09:22.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.848 
"is_configured": false, 00:09:22.848 "data_offset": 0, 00:09:22.848 "data_size": 0 00:09:22.848 }, 00:09:22.848 { 00:09:22.848 "name": "BaseBdev2", 00:09:22.848 "uuid": "31e949d1-9a73-4a8c-ac69-710095bfbbf9", 00:09:22.848 "is_configured": true, 00:09:22.848 "data_offset": 0, 00:09:22.848 "data_size": 65536 00:09:22.848 }, 00:09:22.848 { 00:09:22.848 "name": "BaseBdev3", 00:09:22.848 "uuid": "2ecacbc8-6191-4e72-b7ac-e383c0cb1a68", 00:09:22.848 "is_configured": true, 00:09:22.848 "data_offset": 0, 00:09:22.848 "data_size": 65536 00:09:22.848 } 00:09:22.848 ] 00:09:22.848 }' 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.848 11:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.415 [2024-11-15 11:21:06.241465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.415 11:21:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.415 "name": "Existed_Raid", 00:09:23.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.415 "strip_size_kb": 64, 00:09:23.415 "state": "configuring", 00:09:23.415 "raid_level": "raid0", 00:09:23.415 "superblock": false, 00:09:23.415 "num_base_bdevs": 3, 00:09:23.415 "num_base_bdevs_discovered": 1, 00:09:23.415 "num_base_bdevs_operational": 3, 00:09:23.415 "base_bdevs_list": [ 00:09:23.415 { 00:09:23.415 "name": "BaseBdev1", 00:09:23.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.415 "is_configured": false, 00:09:23.415 "data_offset": 0, 00:09:23.415 "data_size": 0 00:09:23.415 }, 00:09:23.415 { 00:09:23.415 "name": null, 00:09:23.415 "uuid": "31e949d1-9a73-4a8c-ac69-710095bfbbf9", 00:09:23.415 "is_configured": false, 00:09:23.415 "data_offset": 0, 
00:09:23.415 "data_size": 65536 00:09:23.415 }, 00:09:23.415 { 00:09:23.415 "name": "BaseBdev3", 00:09:23.415 "uuid": "2ecacbc8-6191-4e72-b7ac-e383c0cb1a68", 00:09:23.415 "is_configured": true, 00:09:23.415 "data_offset": 0, 00:09:23.415 "data_size": 65536 00:09:23.415 } 00:09:23.415 ] 00:09:23.415 }' 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.415 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.984 [2024-11-15 11:21:06.864198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.984 BaseBdev1 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev1 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.984 [ 00:09:23.984 { 00:09:23.984 "name": "BaseBdev1", 00:09:23.984 "aliases": [ 00:09:23.984 "ba30a9ae-04fd-4a7b-967b-8cb90ea7a8e3" 00:09:23.984 ], 00:09:23.984 "product_name": "Malloc disk", 00:09:23.984 "block_size": 512, 00:09:23.984 "num_blocks": 65536, 00:09:23.984 "uuid": "ba30a9ae-04fd-4a7b-967b-8cb90ea7a8e3", 00:09:23.984 "assigned_rate_limits": { 00:09:23.984 "rw_ios_per_sec": 0, 00:09:23.984 "rw_mbytes_per_sec": 0, 00:09:23.984 "r_mbytes_per_sec": 0, 00:09:23.984 "w_mbytes_per_sec": 0 00:09:23.984 }, 00:09:23.984 "claimed": true, 00:09:23.984 "claim_type": "exclusive_write", 00:09:23.984 "zoned": false, 00:09:23.984 "supported_io_types": { 00:09:23.984 "read": true, 00:09:23.984 "write": true, 00:09:23.984 "unmap": 
true, 00:09:23.984 "flush": true, 00:09:23.984 "reset": true, 00:09:23.984 "nvme_admin": false, 00:09:23.984 "nvme_io": false, 00:09:23.984 "nvme_io_md": false, 00:09:23.984 "write_zeroes": true, 00:09:23.984 "zcopy": true, 00:09:23.984 "get_zone_info": false, 00:09:23.984 "zone_management": false, 00:09:23.984 "zone_append": false, 00:09:23.984 "compare": false, 00:09:23.984 "compare_and_write": false, 00:09:23.984 "abort": true, 00:09:23.984 "seek_hole": false, 00:09:23.984 "seek_data": false, 00:09:23.984 "copy": true, 00:09:23.984 "nvme_iov_md": false 00:09:23.984 }, 00:09:23.984 "memory_domains": [ 00:09:23.984 { 00:09:23.984 "dma_device_id": "system", 00:09:23.984 "dma_device_type": 1 00:09:23.984 }, 00:09:23.984 { 00:09:23.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.984 "dma_device_type": 2 00:09:23.984 } 00:09:23.984 ], 00:09:23.984 "driver_specific": {} 00:09:23.984 } 00:09:23.984 ] 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.984 11:21:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.984 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.243 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.243 "name": "Existed_Raid", 00:09:24.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.243 "strip_size_kb": 64, 00:09:24.243 "state": "configuring", 00:09:24.243 "raid_level": "raid0", 00:09:24.243 "superblock": false, 00:09:24.243 "num_base_bdevs": 3, 00:09:24.243 "num_base_bdevs_discovered": 2, 00:09:24.243 "num_base_bdevs_operational": 3, 00:09:24.243 "base_bdevs_list": [ 00:09:24.243 { 00:09:24.243 "name": "BaseBdev1", 00:09:24.243 "uuid": "ba30a9ae-04fd-4a7b-967b-8cb90ea7a8e3", 00:09:24.243 "is_configured": true, 00:09:24.243 "data_offset": 0, 00:09:24.243 "data_size": 65536 00:09:24.243 }, 00:09:24.243 { 00:09:24.243 "name": null, 00:09:24.243 "uuid": "31e949d1-9a73-4a8c-ac69-710095bfbbf9", 00:09:24.243 "is_configured": false, 00:09:24.243 "data_offset": 0, 00:09:24.243 "data_size": 65536 00:09:24.243 }, 00:09:24.243 { 00:09:24.243 "name": "BaseBdev3", 00:09:24.243 "uuid": "2ecacbc8-6191-4e72-b7ac-e383c0cb1a68", 00:09:24.243 "is_configured": true, 00:09:24.243 "data_offset": 0, 
00:09:24.243 "data_size": 65536 00:09:24.243 } 00:09:24.243 ] 00:09:24.243 }' 00:09:24.243 11:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.243 11:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.526 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.526 11:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.526 11:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.526 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:24.526 11:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.526 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:24.526 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:24.526 11:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.526 11:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.526 [2024-11-15 11:21:07.472548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.784 "name": "Existed_Raid", 00:09:24.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.784 "strip_size_kb": 64, 00:09:24.784 "state": "configuring", 00:09:24.784 "raid_level": "raid0", 00:09:24.784 "superblock": false, 00:09:24.784 "num_base_bdevs": 3, 00:09:24.784 "num_base_bdevs_discovered": 1, 00:09:24.784 "num_base_bdevs_operational": 3, 00:09:24.784 "base_bdevs_list": [ 00:09:24.784 { 00:09:24.784 "name": "BaseBdev1", 00:09:24.784 "uuid": "ba30a9ae-04fd-4a7b-967b-8cb90ea7a8e3", 00:09:24.784 "is_configured": true, 00:09:24.784 "data_offset": 0, 00:09:24.784 "data_size": 65536 00:09:24.784 }, 00:09:24.784 { 
00:09:24.784 "name": null, 00:09:24.784 "uuid": "31e949d1-9a73-4a8c-ac69-710095bfbbf9", 00:09:24.784 "is_configured": false, 00:09:24.784 "data_offset": 0, 00:09:24.784 "data_size": 65536 00:09:24.784 }, 00:09:24.784 { 00:09:24.784 "name": null, 00:09:24.784 "uuid": "2ecacbc8-6191-4e72-b7ac-e383c0cb1a68", 00:09:24.784 "is_configured": false, 00:09:24.784 "data_offset": 0, 00:09:24.784 "data_size": 65536 00:09:24.784 } 00:09:24.784 ] 00:09:24.784 }' 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.784 11:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.350 [2024-11-15 11:21:08.060710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.350 "name": "Existed_Raid", 00:09:25.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.350 "strip_size_kb": 64, 00:09:25.350 "state": "configuring", 00:09:25.350 "raid_level": "raid0", 00:09:25.350 
"superblock": false, 00:09:25.350 "num_base_bdevs": 3, 00:09:25.350 "num_base_bdevs_discovered": 2, 00:09:25.350 "num_base_bdevs_operational": 3, 00:09:25.350 "base_bdevs_list": [ 00:09:25.350 { 00:09:25.350 "name": "BaseBdev1", 00:09:25.350 "uuid": "ba30a9ae-04fd-4a7b-967b-8cb90ea7a8e3", 00:09:25.350 "is_configured": true, 00:09:25.350 "data_offset": 0, 00:09:25.350 "data_size": 65536 00:09:25.350 }, 00:09:25.350 { 00:09:25.350 "name": null, 00:09:25.350 "uuid": "31e949d1-9a73-4a8c-ac69-710095bfbbf9", 00:09:25.350 "is_configured": false, 00:09:25.350 "data_offset": 0, 00:09:25.350 "data_size": 65536 00:09:25.350 }, 00:09:25.350 { 00:09:25.350 "name": "BaseBdev3", 00:09:25.350 "uuid": "2ecacbc8-6191-4e72-b7ac-e383c0cb1a68", 00:09:25.350 "is_configured": true, 00:09:25.350 "data_offset": 0, 00:09:25.350 "data_size": 65536 00:09:25.350 } 00:09:25.350 ] 00:09:25.350 }' 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.350 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.916 [2024-11-15 11:21:08.632931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.916 "name": "Existed_Raid", 00:09:25.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.916 "strip_size_kb": 64, 00:09:25.916 "state": "configuring", 00:09:25.916 "raid_level": "raid0", 00:09:25.916 "superblock": false, 00:09:25.916 "num_base_bdevs": 3, 00:09:25.916 "num_base_bdevs_discovered": 1, 00:09:25.916 "num_base_bdevs_operational": 3, 00:09:25.916 "base_bdevs_list": [ 00:09:25.916 { 00:09:25.916 "name": null, 00:09:25.916 "uuid": "ba30a9ae-04fd-4a7b-967b-8cb90ea7a8e3", 00:09:25.916 "is_configured": false, 00:09:25.916 "data_offset": 0, 00:09:25.916 "data_size": 65536 00:09:25.916 }, 00:09:25.916 { 00:09:25.916 "name": null, 00:09:25.916 "uuid": "31e949d1-9a73-4a8c-ac69-710095bfbbf9", 00:09:25.916 "is_configured": false, 00:09:25.916 "data_offset": 0, 00:09:25.916 "data_size": 65536 00:09:25.916 }, 00:09:25.916 { 00:09:25.916 "name": "BaseBdev3", 00:09:25.916 "uuid": "2ecacbc8-6191-4e72-b7ac-e383c0cb1a68", 00:09:25.916 "is_configured": true, 00:09:25.916 "data_offset": 0, 00:09:25.916 "data_size": 65536 00:09:25.916 } 00:09:25.916 ] 00:09:25.916 }' 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.916 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.483 [2024-11-15 11:21:09.313397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.483 "name": "Existed_Raid", 00:09:26.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.483 "strip_size_kb": 64, 00:09:26.483 "state": "configuring", 00:09:26.483 "raid_level": "raid0", 00:09:26.483 "superblock": false, 00:09:26.483 "num_base_bdevs": 3, 00:09:26.483 "num_base_bdevs_discovered": 2, 00:09:26.483 "num_base_bdevs_operational": 3, 00:09:26.483 "base_bdevs_list": [ 00:09:26.483 { 00:09:26.483 "name": null, 00:09:26.483 "uuid": "ba30a9ae-04fd-4a7b-967b-8cb90ea7a8e3", 00:09:26.483 "is_configured": false, 00:09:26.483 "data_offset": 0, 00:09:26.483 "data_size": 65536 00:09:26.483 }, 00:09:26.483 { 00:09:26.483 "name": "BaseBdev2", 00:09:26.483 "uuid": "31e949d1-9a73-4a8c-ac69-710095bfbbf9", 00:09:26.483 "is_configured": true, 00:09:26.483 "data_offset": 0, 00:09:26.483 "data_size": 65536 00:09:26.483 }, 00:09:26.483 { 00:09:26.483 "name": "BaseBdev3", 00:09:26.483 "uuid": "2ecacbc8-6191-4e72-b7ac-e383c0cb1a68", 00:09:26.483 "is_configured": true, 00:09:26.483 "data_offset": 0, 00:09:26.483 "data_size": 65536 00:09:26.483 } 00:09:26.483 ] 00:09:26.483 }' 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.483 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.050 11:21:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ba30a9ae-04fd-4a7b-967b-8cb90ea7a8e3 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.050 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.309 [2024-11-15 11:21:10.011811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:27.309 [2024-11-15 11:21:10.011883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:27.309 [2024-11-15 11:21:10.011898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:27.309 [2024-11-15 11:21:10.012242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:27.309 [2024-11-15 11:21:10.012448] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:27.309 [2024-11-15 11:21:10.012463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:27.309 [2024-11-15 11:21:10.012825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.309 NewBaseBdev 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:27.309 [ 00:09:27.309 { 00:09:27.309 "name": "NewBaseBdev", 00:09:27.309 "aliases": [ 00:09:27.309 "ba30a9ae-04fd-4a7b-967b-8cb90ea7a8e3" 00:09:27.309 ], 00:09:27.309 "product_name": "Malloc disk", 00:09:27.309 "block_size": 512, 00:09:27.309 "num_blocks": 65536, 00:09:27.309 "uuid": "ba30a9ae-04fd-4a7b-967b-8cb90ea7a8e3", 00:09:27.309 "assigned_rate_limits": { 00:09:27.309 "rw_ios_per_sec": 0, 00:09:27.309 "rw_mbytes_per_sec": 0, 00:09:27.309 "r_mbytes_per_sec": 0, 00:09:27.309 "w_mbytes_per_sec": 0 00:09:27.309 }, 00:09:27.309 "claimed": true, 00:09:27.309 "claim_type": "exclusive_write", 00:09:27.309 "zoned": false, 00:09:27.309 "supported_io_types": { 00:09:27.309 "read": true, 00:09:27.309 "write": true, 00:09:27.309 "unmap": true, 00:09:27.309 "flush": true, 00:09:27.309 "reset": true, 00:09:27.309 "nvme_admin": false, 00:09:27.309 "nvme_io": false, 00:09:27.309 "nvme_io_md": false, 00:09:27.309 "write_zeroes": true, 00:09:27.309 "zcopy": true, 00:09:27.309 "get_zone_info": false, 00:09:27.309 "zone_management": false, 00:09:27.309 "zone_append": false, 00:09:27.309 "compare": false, 00:09:27.309 "compare_and_write": false, 00:09:27.309 "abort": true, 00:09:27.309 "seek_hole": false, 00:09:27.309 "seek_data": false, 00:09:27.309 "copy": true, 00:09:27.309 "nvme_iov_md": false 00:09:27.309 }, 00:09:27.309 "memory_domains": [ 00:09:27.309 { 00:09:27.309 "dma_device_id": "system", 00:09:27.309 "dma_device_type": 1 00:09:27.309 }, 00:09:27.309 { 00:09:27.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.309 "dma_device_type": 2 00:09:27.309 } 00:09:27.309 ], 00:09:27.309 "driver_specific": {} 00:09:27.309 } 00:09:27.309 ] 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.309 "name": "Existed_Raid", 00:09:27.309 "uuid": "cd97e788-075f-40b4-8c57-bff449be5b18", 00:09:27.309 "strip_size_kb": 64, 00:09:27.309 "state": "online", 00:09:27.309 "raid_level": "raid0", 00:09:27.309 "superblock": false, 00:09:27.309 "num_base_bdevs": 3, 00:09:27.309 
"num_base_bdevs_discovered": 3, 00:09:27.309 "num_base_bdevs_operational": 3, 00:09:27.309 "base_bdevs_list": [ 00:09:27.309 { 00:09:27.309 "name": "NewBaseBdev", 00:09:27.309 "uuid": "ba30a9ae-04fd-4a7b-967b-8cb90ea7a8e3", 00:09:27.309 "is_configured": true, 00:09:27.309 "data_offset": 0, 00:09:27.309 "data_size": 65536 00:09:27.309 }, 00:09:27.309 { 00:09:27.309 "name": "BaseBdev2", 00:09:27.309 "uuid": "31e949d1-9a73-4a8c-ac69-710095bfbbf9", 00:09:27.309 "is_configured": true, 00:09:27.309 "data_offset": 0, 00:09:27.309 "data_size": 65536 00:09:27.309 }, 00:09:27.309 { 00:09:27.309 "name": "BaseBdev3", 00:09:27.309 "uuid": "2ecacbc8-6191-4e72-b7ac-e383c0cb1a68", 00:09:27.309 "is_configured": true, 00:09:27.309 "data_offset": 0, 00:09:27.309 "data_size": 65536 00:09:27.309 } 00:09:27.309 ] 00:09:27.309 }' 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.309 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.876 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:27.876 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.877 [2024-11-15 11:21:10.604438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.877 "name": "Existed_Raid", 00:09:27.877 "aliases": [ 00:09:27.877 "cd97e788-075f-40b4-8c57-bff449be5b18" 00:09:27.877 ], 00:09:27.877 "product_name": "Raid Volume", 00:09:27.877 "block_size": 512, 00:09:27.877 "num_blocks": 196608, 00:09:27.877 "uuid": "cd97e788-075f-40b4-8c57-bff449be5b18", 00:09:27.877 "assigned_rate_limits": { 00:09:27.877 "rw_ios_per_sec": 0, 00:09:27.877 "rw_mbytes_per_sec": 0, 00:09:27.877 "r_mbytes_per_sec": 0, 00:09:27.877 "w_mbytes_per_sec": 0 00:09:27.877 }, 00:09:27.877 "claimed": false, 00:09:27.877 "zoned": false, 00:09:27.877 "supported_io_types": { 00:09:27.877 "read": true, 00:09:27.877 "write": true, 00:09:27.877 "unmap": true, 00:09:27.877 "flush": true, 00:09:27.877 "reset": true, 00:09:27.877 "nvme_admin": false, 00:09:27.877 "nvme_io": false, 00:09:27.877 "nvme_io_md": false, 00:09:27.877 "write_zeroes": true, 00:09:27.877 "zcopy": false, 00:09:27.877 "get_zone_info": false, 00:09:27.877 "zone_management": false, 00:09:27.877 "zone_append": false, 00:09:27.877 "compare": false, 00:09:27.877 "compare_and_write": false, 00:09:27.877 "abort": false, 00:09:27.877 "seek_hole": false, 00:09:27.877 "seek_data": false, 00:09:27.877 "copy": false, 00:09:27.877 "nvme_iov_md": false 00:09:27.877 }, 00:09:27.877 "memory_domains": [ 00:09:27.877 { 00:09:27.877 "dma_device_id": "system", 00:09:27.877 "dma_device_type": 1 00:09:27.877 }, 00:09:27.877 { 00:09:27.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.877 "dma_device_type": 2 00:09:27.877 }, 
00:09:27.877 { 00:09:27.877 "dma_device_id": "system", 00:09:27.877 "dma_device_type": 1 00:09:27.877 }, 00:09:27.877 { 00:09:27.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.877 "dma_device_type": 2 00:09:27.877 }, 00:09:27.877 { 00:09:27.877 "dma_device_id": "system", 00:09:27.877 "dma_device_type": 1 00:09:27.877 }, 00:09:27.877 { 00:09:27.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.877 "dma_device_type": 2 00:09:27.877 } 00:09:27.877 ], 00:09:27.877 "driver_specific": { 00:09:27.877 "raid": { 00:09:27.877 "uuid": "cd97e788-075f-40b4-8c57-bff449be5b18", 00:09:27.877 "strip_size_kb": 64, 00:09:27.877 "state": "online", 00:09:27.877 "raid_level": "raid0", 00:09:27.877 "superblock": false, 00:09:27.877 "num_base_bdevs": 3, 00:09:27.877 "num_base_bdevs_discovered": 3, 00:09:27.877 "num_base_bdevs_operational": 3, 00:09:27.877 "base_bdevs_list": [ 00:09:27.877 { 00:09:27.877 "name": "NewBaseBdev", 00:09:27.877 "uuid": "ba30a9ae-04fd-4a7b-967b-8cb90ea7a8e3", 00:09:27.877 "is_configured": true, 00:09:27.877 "data_offset": 0, 00:09:27.877 "data_size": 65536 00:09:27.877 }, 00:09:27.877 { 00:09:27.877 "name": "BaseBdev2", 00:09:27.877 "uuid": "31e949d1-9a73-4a8c-ac69-710095bfbbf9", 00:09:27.877 "is_configured": true, 00:09:27.877 "data_offset": 0, 00:09:27.877 "data_size": 65536 00:09:27.877 }, 00:09:27.877 { 00:09:27.877 "name": "BaseBdev3", 00:09:27.877 "uuid": "2ecacbc8-6191-4e72-b7ac-e383c0cb1a68", 00:09:27.877 "is_configured": true, 00:09:27.877 "data_offset": 0, 00:09:27.877 "data_size": 65536 00:09:27.877 } 00:09:27.877 ] 00:09:27.877 } 00:09:27.877 } 00:09:27.877 }' 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:27.877 BaseBdev2 00:09:27.877 BaseBdev3' 00:09:27.877 11:21:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.877 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.136 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.136 [2024-11-15 11:21:10.932153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.136 [2024-11-15 11:21:10.932221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:28.137 [2024-11-15 11:21:10.932362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.137 [2024-11-15 11:21:10.932443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:28.137 [2024-11-15 11:21:10.932464] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:28.137 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.137 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63663 00:09:28.137 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 63663 ']' 00:09:28.137 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 63663 00:09:28.137 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:28.137 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:28.137 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63663 00:09:28.137 killing process with pid 63663 00:09:28.137 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:28.137 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:28.137 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63663' 00:09:28.137 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 63663 00:09:28.137 [2024-11-15 11:21:10.972947] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:28.137 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 63663 00:09:28.395 [2024-11-15 11:21:11.243669] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.331 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:29.331 ************************************ 00:09:29.331 END TEST raid_state_function_test 00:09:29.331 ************************************ 00:09:29.331 00:09:29.331 real 0m12.107s 
00:09:29.331 user 0m20.010s 00:09:29.331 sys 0m1.793s 00:09:29.331 11:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:29.331 11:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.589 11:21:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:29.589 11:21:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:29.589 11:21:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:29.589 11:21:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.589 ************************************ 00:09:29.589 START TEST raid_state_function_test_sb 00:09:29.589 ************************************ 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:29.589 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64306 00:09:29.590 11:21:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:29.590 Process raid pid: 64306 00:09:29.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64306' 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64306 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64306 ']' 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:29.590 11:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.590 [2024-11-15 11:21:12.421788] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:09:29.590 [2024-11-15 11:21:12.422367] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.848 [2024-11-15 11:21:12.598047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.848 [2024-11-15 11:21:12.734464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.106 [2024-11-15 11:21:12.990890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.106 [2024-11-15 11:21:12.990978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.673 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:30.673 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:30.673 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.673 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.673 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.673 [2024-11-15 11:21:13.443613] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.673 [2024-11-15 11:21:13.443708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.673 [2024-11-15 11:21:13.443725] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.673 [2024-11-15 11:21:13.443741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.674 [2024-11-15 11:21:13.443750] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:30.674 [2024-11-15 11:21:13.443764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.674 "name": "Existed_Raid", 00:09:30.674 "uuid": "67517a64-e95f-4a26-b25d-d8162f900926", 00:09:30.674 "strip_size_kb": 64, 00:09:30.674 "state": "configuring", 00:09:30.674 "raid_level": "raid0", 00:09:30.674 "superblock": true, 00:09:30.674 "num_base_bdevs": 3, 00:09:30.674 "num_base_bdevs_discovered": 0, 00:09:30.674 "num_base_bdevs_operational": 3, 00:09:30.674 "base_bdevs_list": [ 00:09:30.674 { 00:09:30.674 "name": "BaseBdev1", 00:09:30.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.674 "is_configured": false, 00:09:30.674 "data_offset": 0, 00:09:30.674 "data_size": 0 00:09:30.674 }, 00:09:30.674 { 00:09:30.674 "name": "BaseBdev2", 00:09:30.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.674 "is_configured": false, 00:09:30.674 "data_offset": 0, 00:09:30.674 "data_size": 0 00:09:30.674 }, 00:09:30.674 { 00:09:30.674 "name": "BaseBdev3", 00:09:30.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.674 "is_configured": false, 00:09:30.674 "data_offset": 0, 00:09:30.674 "data_size": 0 00:09:30.674 } 00:09:30.674 ] 00:09:30.674 }' 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.674 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.242 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.242 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 [2024-11-15 11:21:13.975705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.242 [2024-11-15 11:21:13.975897] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:31.242 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.242 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.242 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.242 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 [2024-11-15 11:21:13.983669] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.242 [2024-11-15 11:21:13.983727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.242 [2024-11-15 11:21:13.983743] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.242 [2024-11-15 11:21:13.983761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.242 [2024-11-15 11:21:13.983771] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.242 [2024-11-15 11:21:13.983787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.242 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.242 11:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:31.242 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.242 11:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 [2024-11-15 11:21:14.034634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.242 BaseBdev1 
00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.242 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 [ 00:09:31.242 { 00:09:31.242 "name": "BaseBdev1", 00:09:31.242 "aliases": [ 00:09:31.242 "51177d65-d48f-4c84-9521-f24a406984a1" 00:09:31.242 ], 00:09:31.242 "product_name": "Malloc disk", 00:09:31.242 "block_size": 512, 00:09:31.242 "num_blocks": 65536, 00:09:31.242 "uuid": "51177d65-d48f-4c84-9521-f24a406984a1", 00:09:31.242 "assigned_rate_limits": { 00:09:31.242 
"rw_ios_per_sec": 0, 00:09:31.242 "rw_mbytes_per_sec": 0, 00:09:31.242 "r_mbytes_per_sec": 0, 00:09:31.242 "w_mbytes_per_sec": 0 00:09:31.242 }, 00:09:31.242 "claimed": true, 00:09:31.242 "claim_type": "exclusive_write", 00:09:31.242 "zoned": false, 00:09:31.242 "supported_io_types": { 00:09:31.242 "read": true, 00:09:31.242 "write": true, 00:09:31.242 "unmap": true, 00:09:31.242 "flush": true, 00:09:31.242 "reset": true, 00:09:31.242 "nvme_admin": false, 00:09:31.243 "nvme_io": false, 00:09:31.243 "nvme_io_md": false, 00:09:31.243 "write_zeroes": true, 00:09:31.243 "zcopy": true, 00:09:31.243 "get_zone_info": false, 00:09:31.243 "zone_management": false, 00:09:31.243 "zone_append": false, 00:09:31.243 "compare": false, 00:09:31.243 "compare_and_write": false, 00:09:31.243 "abort": true, 00:09:31.243 "seek_hole": false, 00:09:31.243 "seek_data": false, 00:09:31.243 "copy": true, 00:09:31.243 "nvme_iov_md": false 00:09:31.243 }, 00:09:31.243 "memory_domains": [ 00:09:31.243 { 00:09:31.243 "dma_device_id": "system", 00:09:31.243 "dma_device_type": 1 00:09:31.243 }, 00:09:31.243 { 00:09:31.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.243 "dma_device_type": 2 00:09:31.243 } 00:09:31.243 ], 00:09:31.243 "driver_specific": {} 00:09:31.243 } 00:09:31.243 ] 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.243 "name": "Existed_Raid", 00:09:31.243 "uuid": "5ab06bef-a83a-4657-aaa7-dbb4d9853be7", 00:09:31.243 "strip_size_kb": 64, 00:09:31.243 "state": "configuring", 00:09:31.243 "raid_level": "raid0", 00:09:31.243 "superblock": true, 00:09:31.243 "num_base_bdevs": 3, 00:09:31.243 "num_base_bdevs_discovered": 1, 00:09:31.243 "num_base_bdevs_operational": 3, 00:09:31.243 "base_bdevs_list": [ 00:09:31.243 { 00:09:31.243 "name": "BaseBdev1", 00:09:31.243 "uuid": "51177d65-d48f-4c84-9521-f24a406984a1", 00:09:31.243 "is_configured": true, 00:09:31.243 "data_offset": 2048, 00:09:31.243 "data_size": 63488 
00:09:31.243 }, 00:09:31.243 { 00:09:31.243 "name": "BaseBdev2", 00:09:31.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.243 "is_configured": false, 00:09:31.243 "data_offset": 0, 00:09:31.243 "data_size": 0 00:09:31.243 }, 00:09:31.243 { 00:09:31.243 "name": "BaseBdev3", 00:09:31.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.243 "is_configured": false, 00:09:31.243 "data_offset": 0, 00:09:31.243 "data_size": 0 00:09:31.243 } 00:09:31.243 ] 00:09:31.243 }' 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.243 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.810 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.810 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.810 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.810 [2024-11-15 11:21:14.614848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.810 [2024-11-15 11:21:14.615115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:31.810 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.810 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.810 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.810 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.810 [2024-11-15 11:21:14.622884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.811 [2024-11-15 
11:21:14.625495] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.811 [2024-11-15 11:21:14.625553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.811 [2024-11-15 11:21:14.625571] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.811 [2024-11-15 11:21:14.625587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.811 "name": "Existed_Raid", 00:09:31.811 "uuid": "60f84738-2ca6-4b3a-a9a1-c3a06ff3594c", 00:09:31.811 "strip_size_kb": 64, 00:09:31.811 "state": "configuring", 00:09:31.811 "raid_level": "raid0", 00:09:31.811 "superblock": true, 00:09:31.811 "num_base_bdevs": 3, 00:09:31.811 "num_base_bdevs_discovered": 1, 00:09:31.811 "num_base_bdevs_operational": 3, 00:09:31.811 "base_bdevs_list": [ 00:09:31.811 { 00:09:31.811 "name": "BaseBdev1", 00:09:31.811 "uuid": "51177d65-d48f-4c84-9521-f24a406984a1", 00:09:31.811 "is_configured": true, 00:09:31.811 "data_offset": 2048, 00:09:31.811 "data_size": 63488 00:09:31.811 }, 00:09:31.811 { 00:09:31.811 "name": "BaseBdev2", 00:09:31.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.811 "is_configured": false, 00:09:31.811 "data_offset": 0, 00:09:31.811 "data_size": 0 00:09:31.811 }, 00:09:31.811 { 00:09:31.811 "name": "BaseBdev3", 00:09:31.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.811 "is_configured": false, 00:09:31.811 "data_offset": 0, 00:09:31.811 "data_size": 0 00:09:31.811 } 00:09:31.811 ] 00:09:31.811 }' 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.811 11:21:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.375 [2024-11-15 11:21:15.183617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.375 BaseBdev2 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.375 [ 00:09:32.375 { 00:09:32.375 "name": "BaseBdev2", 00:09:32.375 "aliases": [ 00:09:32.375 "6777e78a-aa41-4d6f-b53e-e3013863c340" 00:09:32.375 ], 00:09:32.375 "product_name": "Malloc disk", 00:09:32.375 "block_size": 512, 00:09:32.375 "num_blocks": 65536, 00:09:32.375 "uuid": "6777e78a-aa41-4d6f-b53e-e3013863c340", 00:09:32.375 "assigned_rate_limits": { 00:09:32.375 "rw_ios_per_sec": 0, 00:09:32.375 "rw_mbytes_per_sec": 0, 00:09:32.375 "r_mbytes_per_sec": 0, 00:09:32.375 "w_mbytes_per_sec": 0 00:09:32.375 }, 00:09:32.375 "claimed": true, 00:09:32.375 "claim_type": "exclusive_write", 00:09:32.375 "zoned": false, 00:09:32.375 "supported_io_types": { 00:09:32.375 "read": true, 00:09:32.375 "write": true, 00:09:32.375 "unmap": true, 00:09:32.375 "flush": true, 00:09:32.375 "reset": true, 00:09:32.375 "nvme_admin": false, 00:09:32.375 "nvme_io": false, 00:09:32.375 "nvme_io_md": false, 00:09:32.375 "write_zeroes": true, 00:09:32.375 "zcopy": true, 00:09:32.375 "get_zone_info": false, 00:09:32.375 "zone_management": false, 00:09:32.375 "zone_append": false, 00:09:32.375 "compare": false, 00:09:32.375 "compare_and_write": false, 00:09:32.375 "abort": true, 00:09:32.375 "seek_hole": false, 00:09:32.375 "seek_data": false, 00:09:32.375 "copy": true, 00:09:32.375 "nvme_iov_md": false 00:09:32.375 }, 00:09:32.375 "memory_domains": [ 00:09:32.375 { 00:09:32.375 "dma_device_id": "system", 00:09:32.375 "dma_device_type": 1 00:09:32.375 }, 00:09:32.375 { 00:09:32.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.375 "dma_device_type": 2 00:09:32.375 } 00:09:32.375 ], 00:09:32.375 "driver_specific": {} 00:09:32.375 } 00:09:32.375 ] 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.375 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.375 "name": "Existed_Raid", 00:09:32.375 "uuid": "60f84738-2ca6-4b3a-a9a1-c3a06ff3594c", 00:09:32.375 "strip_size_kb": 64, 00:09:32.375 "state": "configuring", 00:09:32.375 "raid_level": "raid0", 00:09:32.375 "superblock": true, 00:09:32.375 "num_base_bdevs": 3, 00:09:32.375 "num_base_bdevs_discovered": 2, 00:09:32.375 "num_base_bdevs_operational": 3, 00:09:32.375 "base_bdevs_list": [ 00:09:32.375 { 00:09:32.375 "name": "BaseBdev1", 00:09:32.375 "uuid": "51177d65-d48f-4c84-9521-f24a406984a1", 00:09:32.375 "is_configured": true, 00:09:32.375 "data_offset": 2048, 00:09:32.375 "data_size": 63488 00:09:32.375 }, 00:09:32.375 { 00:09:32.375 "name": "BaseBdev2", 00:09:32.375 "uuid": "6777e78a-aa41-4d6f-b53e-e3013863c340", 00:09:32.375 "is_configured": true, 00:09:32.375 "data_offset": 2048, 00:09:32.375 "data_size": 63488 00:09:32.375 }, 00:09:32.376 { 00:09:32.376 "name": "BaseBdev3", 00:09:32.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.376 "is_configured": false, 00:09:32.376 "data_offset": 0, 00:09:32.376 "data_size": 0 00:09:32.376 } 00:09:32.376 ] 00:09:32.376 }' 00:09:32.376 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.376 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.968 [2024-11-15 11:21:15.823915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.968 BaseBdev3 00:09:32.968 [2024-11-15 
11:21:15.824419] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:32.968 [2024-11-15 11:21:15.824460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:32.968 [2024-11-15 11:21:15.824900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:32.968 [2024-11-15 11:21:15.825225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:32.968 [2024-11-15 11:21:15.825248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.968 [2024-11-15 11:21:15.825475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.968 [ 00:09:32.968 { 00:09:32.968 "name": "BaseBdev3", 00:09:32.968 "aliases": [ 00:09:32.968 "35b1259b-a59e-4c52-9dfd-a80477de119d" 00:09:32.968 ], 00:09:32.968 "product_name": "Malloc disk", 00:09:32.968 "block_size": 512, 00:09:32.968 "num_blocks": 65536, 00:09:32.968 "uuid": "35b1259b-a59e-4c52-9dfd-a80477de119d", 00:09:32.968 "assigned_rate_limits": { 00:09:32.968 "rw_ios_per_sec": 0, 00:09:32.968 "rw_mbytes_per_sec": 0, 00:09:32.968 "r_mbytes_per_sec": 0, 00:09:32.968 "w_mbytes_per_sec": 0 00:09:32.968 }, 00:09:32.968 "claimed": true, 00:09:32.968 "claim_type": "exclusive_write", 00:09:32.968 "zoned": false, 00:09:32.968 "supported_io_types": { 00:09:32.968 "read": true, 00:09:32.968 "write": true, 00:09:32.968 "unmap": true, 00:09:32.968 "flush": true, 00:09:32.968 "reset": true, 00:09:32.968 "nvme_admin": false, 00:09:32.968 "nvme_io": false, 00:09:32.968 "nvme_io_md": false, 00:09:32.968 "write_zeroes": true, 00:09:32.968 "zcopy": true, 00:09:32.968 "get_zone_info": false, 00:09:32.968 "zone_management": false, 00:09:32.968 "zone_append": false, 00:09:32.968 "compare": false, 00:09:32.968 "compare_and_write": false, 00:09:32.968 "abort": true, 00:09:32.968 "seek_hole": false, 00:09:32.968 "seek_data": false, 00:09:32.968 "copy": true, 00:09:32.968 "nvme_iov_md": false 00:09:32.968 }, 00:09:32.968 "memory_domains": [ 00:09:32.968 { 00:09:32.968 "dma_device_id": "system", 00:09:32.968 "dma_device_type": 1 00:09:32.968 }, 00:09:32.968 { 00:09:32.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.968 "dma_device_type": 2 00:09:32.968 } 00:09:32.968 ], 00:09:32.968 "driver_specific": {} 
00:09:32.968 } 00:09:32.968 ] 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.968 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.226 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.226 "name": "Existed_Raid", 00:09:33.226 "uuid": "60f84738-2ca6-4b3a-a9a1-c3a06ff3594c", 00:09:33.226 "strip_size_kb": 64, 00:09:33.226 "state": "online", 00:09:33.226 "raid_level": "raid0", 00:09:33.226 "superblock": true, 00:09:33.226 "num_base_bdevs": 3, 00:09:33.226 "num_base_bdevs_discovered": 3, 00:09:33.226 "num_base_bdevs_operational": 3, 00:09:33.226 "base_bdevs_list": [ 00:09:33.226 { 00:09:33.226 "name": "BaseBdev1", 00:09:33.226 "uuid": "51177d65-d48f-4c84-9521-f24a406984a1", 00:09:33.226 "is_configured": true, 00:09:33.226 "data_offset": 2048, 00:09:33.226 "data_size": 63488 00:09:33.226 }, 00:09:33.226 { 00:09:33.226 "name": "BaseBdev2", 00:09:33.226 "uuid": "6777e78a-aa41-4d6f-b53e-e3013863c340", 00:09:33.226 "is_configured": true, 00:09:33.226 "data_offset": 2048, 00:09:33.226 "data_size": 63488 00:09:33.226 }, 00:09:33.226 { 00:09:33.226 "name": "BaseBdev3", 00:09:33.226 "uuid": "35b1259b-a59e-4c52-9dfd-a80477de119d", 00:09:33.226 "is_configured": true, 00:09:33.226 "data_offset": 2048, 00:09:33.226 "data_size": 63488 00:09:33.226 } 00:09:33.226 ] 00:09:33.226 }' 00:09:33.226 11:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.226 11:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.484 [2024-11-15 11:21:16.364709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.484 "name": "Existed_Raid", 00:09:33.484 "aliases": [ 00:09:33.484 "60f84738-2ca6-4b3a-a9a1-c3a06ff3594c" 00:09:33.484 ], 00:09:33.484 "product_name": "Raid Volume", 00:09:33.484 "block_size": 512, 00:09:33.484 "num_blocks": 190464, 00:09:33.484 "uuid": "60f84738-2ca6-4b3a-a9a1-c3a06ff3594c", 00:09:33.484 "assigned_rate_limits": { 00:09:33.484 "rw_ios_per_sec": 0, 00:09:33.484 "rw_mbytes_per_sec": 0, 00:09:33.484 "r_mbytes_per_sec": 0, 00:09:33.484 "w_mbytes_per_sec": 0 00:09:33.484 }, 00:09:33.484 "claimed": false, 00:09:33.484 "zoned": false, 00:09:33.484 "supported_io_types": { 00:09:33.484 "read": true, 00:09:33.484 "write": true, 00:09:33.484 "unmap": true, 00:09:33.484 "flush": true, 00:09:33.484 "reset": true, 00:09:33.484 "nvme_admin": false, 00:09:33.484 "nvme_io": false, 00:09:33.484 "nvme_io_md": false, 00:09:33.484 
"write_zeroes": true, 00:09:33.484 "zcopy": false, 00:09:33.484 "get_zone_info": false, 00:09:33.484 "zone_management": false, 00:09:33.484 "zone_append": false, 00:09:33.484 "compare": false, 00:09:33.484 "compare_and_write": false, 00:09:33.484 "abort": false, 00:09:33.484 "seek_hole": false, 00:09:33.484 "seek_data": false, 00:09:33.484 "copy": false, 00:09:33.484 "nvme_iov_md": false 00:09:33.484 }, 00:09:33.484 "memory_domains": [ 00:09:33.484 { 00:09:33.484 "dma_device_id": "system", 00:09:33.484 "dma_device_type": 1 00:09:33.484 }, 00:09:33.484 { 00:09:33.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.484 "dma_device_type": 2 00:09:33.484 }, 00:09:33.484 { 00:09:33.484 "dma_device_id": "system", 00:09:33.484 "dma_device_type": 1 00:09:33.484 }, 00:09:33.484 { 00:09:33.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.484 "dma_device_type": 2 00:09:33.484 }, 00:09:33.484 { 00:09:33.484 "dma_device_id": "system", 00:09:33.484 "dma_device_type": 1 00:09:33.484 }, 00:09:33.484 { 00:09:33.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.484 "dma_device_type": 2 00:09:33.484 } 00:09:33.484 ], 00:09:33.484 "driver_specific": { 00:09:33.484 "raid": { 00:09:33.484 "uuid": "60f84738-2ca6-4b3a-a9a1-c3a06ff3594c", 00:09:33.484 "strip_size_kb": 64, 00:09:33.484 "state": "online", 00:09:33.484 "raid_level": "raid0", 00:09:33.484 "superblock": true, 00:09:33.484 "num_base_bdevs": 3, 00:09:33.484 "num_base_bdevs_discovered": 3, 00:09:33.484 "num_base_bdevs_operational": 3, 00:09:33.484 "base_bdevs_list": [ 00:09:33.484 { 00:09:33.484 "name": "BaseBdev1", 00:09:33.484 "uuid": "51177d65-d48f-4c84-9521-f24a406984a1", 00:09:33.484 "is_configured": true, 00:09:33.484 "data_offset": 2048, 00:09:33.484 "data_size": 63488 00:09:33.484 }, 00:09:33.484 { 00:09:33.484 "name": "BaseBdev2", 00:09:33.484 "uuid": "6777e78a-aa41-4d6f-b53e-e3013863c340", 00:09:33.484 "is_configured": true, 00:09:33.484 "data_offset": 2048, 00:09:33.484 "data_size": 63488 00:09:33.484 }, 
00:09:33.484 { 00:09:33.484 "name": "BaseBdev3", 00:09:33.484 "uuid": "35b1259b-a59e-4c52-9dfd-a80477de119d", 00:09:33.484 "is_configured": true, 00:09:33.484 "data_offset": 2048, 00:09:33.484 "data_size": 63488 00:09:33.484 } 00:09:33.484 ] 00:09:33.484 } 00:09:33.484 } 00:09:33.484 }' 00:09:33.484 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:33.743 BaseBdev2 00:09:33.743 BaseBdev3' 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.743 
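The `jq` filter traced above (`.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`) is how the test derives `base_bdev_names` from `bdev_get_bdevs` output. A minimal Python equivalent of that extraction, run against a trimmed stand-in for the `Existed_Raid` JSON captured in the log (the real object carries many more fields), might look like:

```python
import json

# Trimmed stand-in for the Existed_Raid JSON captured in raid_bdev_info above;
# only the fields the jq filter touches are reproduced here.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(" ".join(base_bdev_names))  # BaseBdev1 BaseBdev2 BaseBdev3
```

In the log the resulting `base_bdev_names='BaseBdev1 BaseBdev2 BaseBdev3'` then drives the per-bdev `bdev_get_bdevs -b <name>` comparison loop.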
11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.743 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.001 [2024-11-15 11:21:16.692326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:34.001 [2024-11-15 11:21:16.692364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.001 [2024-11-15 11:21:16.692442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.001 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.001 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:34.001 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:34.001 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:34.001 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:34.001 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.002 "name": "Existed_Raid", 00:09:34.002 "uuid": "60f84738-2ca6-4b3a-a9a1-c3a06ff3594c", 00:09:34.002 "strip_size_kb": 64, 00:09:34.002 "state": "offline", 00:09:34.002 "raid_level": "raid0", 00:09:34.002 "superblock": true, 00:09:34.002 "num_base_bdevs": 3, 00:09:34.002 "num_base_bdevs_discovered": 2, 00:09:34.002 "num_base_bdevs_operational": 2, 00:09:34.002 "base_bdevs_list": [ 00:09:34.002 { 00:09:34.002 "name": null, 00:09:34.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.002 "is_configured": false, 00:09:34.002 "data_offset": 0, 00:09:34.002 "data_size": 63488 00:09:34.002 }, 00:09:34.002 { 00:09:34.002 "name": "BaseBdev2", 00:09:34.002 "uuid": "6777e78a-aa41-4d6f-b53e-e3013863c340", 00:09:34.002 "is_configured": true, 00:09:34.002 "data_offset": 2048, 00:09:34.002 "data_size": 63488 00:09:34.002 }, 00:09:34.002 { 00:09:34.002 "name": "BaseBdev3", 00:09:34.002 "uuid": "35b1259b-a59e-4c52-9dfd-a80477de119d", 
00:09:34.002 "is_configured": true, 00:09:34.002 "data_offset": 2048, 00:09:34.002 "data_size": 63488 00:09:34.002 } 00:09:34.002 ] 00:09:34.002 }' 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.002 11:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.569 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.570 [2024-11-15 11:21:17.336790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb 
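The `verify_raid_bdev_state Existed_Raid offline raid0 64 2` call above compares fields of the captured JSON against the expected values passed as arguments. A hedged Python sketch of those same checks, against a trimmed copy of the `offline` JSON from the log (field names are taken verbatim from the log; the expected values mirror the shell locals `expected_state`, `raid_level`, `strip_size`, and `num_base_bdevs_operational`):

```python
import json

# Trimmed copy of the Existed_Raid JSON the test captured after BaseBdev1 was
# deleted; only the fields the state check compares are kept here.
info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "offline",
  "raid_level": "raid0",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")

# raid0 has no redundancy, so losing one base bdev drives the array offline
# (the has_redundancy raid0 -> return 1 branch visible in the trace).
assert info["state"] == "offline"
assert info["raid_level"] == "raid0"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_discovered"] == 2
assert info["num_base_bdevs_operational"] == 2
print("state checks passed")
```

This mirrors why the trace sets `expected_state=offline`: `has_redundancy raid0` returns 1, so removing any base bdev cannot leave a raid0 volume online.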
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.570 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.570 [2024-11-15 11:21:17.484239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:34.570 [2024-11-15 11:21:17.484333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.829 BaseBdev2 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:34.829 11:21:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.829 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.829 [ 00:09:34.829 { 00:09:34.829 "name": "BaseBdev2", 00:09:34.829 "aliases": [ 00:09:34.829 "17dabf9c-ed1e-4b66-8f4c-94ee36ae219a" 00:09:34.829 ], 00:09:34.829 "product_name": "Malloc disk", 00:09:34.829 "block_size": 512, 00:09:34.830 "num_blocks": 65536, 00:09:34.830 "uuid": "17dabf9c-ed1e-4b66-8f4c-94ee36ae219a", 00:09:34.830 "assigned_rate_limits": { 00:09:34.830 "rw_ios_per_sec": 0, 00:09:34.830 "rw_mbytes_per_sec": 0, 00:09:34.830 "r_mbytes_per_sec": 0, 00:09:34.830 "w_mbytes_per_sec": 0 00:09:34.830 }, 00:09:34.830 "claimed": false, 00:09:34.830 "zoned": false, 00:09:34.830 "supported_io_types": { 00:09:34.830 "read": true, 00:09:34.830 "write": true, 00:09:34.830 "unmap": true, 00:09:34.830 "flush": true, 00:09:34.830 "reset": true, 00:09:34.830 "nvme_admin": false, 00:09:34.830 "nvme_io": false, 00:09:34.830 "nvme_io_md": false, 00:09:34.830 "write_zeroes": true, 00:09:34.830 "zcopy": true, 00:09:34.830 "get_zone_info": false, 00:09:34.830 
"zone_management": false, 00:09:34.830 "zone_append": false, 00:09:34.830 "compare": false, 00:09:34.830 "compare_and_write": false, 00:09:34.830 "abort": true, 00:09:34.830 "seek_hole": false, 00:09:34.830 "seek_data": false, 00:09:34.830 "copy": true, 00:09:34.830 "nvme_iov_md": false 00:09:34.830 }, 00:09:34.830 "memory_domains": [ 00:09:34.830 { 00:09:34.830 "dma_device_id": "system", 00:09:34.830 "dma_device_type": 1 00:09:34.830 }, 00:09:34.830 { 00:09:34.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.830 "dma_device_type": 2 00:09:34.830 } 00:09:34.830 ], 00:09:34.830 "driver_specific": {} 00:09:34.830 } 00:09:34.830 ] 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.830 BaseBdev3 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.830 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.830 [ 00:09:34.830 { 00:09:34.830 "name": "BaseBdev3", 00:09:35.089 "aliases": [ 00:09:35.089 "dcda2552-2395-4baf-9360-9db82e65da9e" 00:09:35.089 ], 00:09:35.089 "product_name": "Malloc disk", 00:09:35.089 "block_size": 512, 00:09:35.089 "num_blocks": 65536, 00:09:35.089 "uuid": "dcda2552-2395-4baf-9360-9db82e65da9e", 00:09:35.089 "assigned_rate_limits": { 00:09:35.089 "rw_ios_per_sec": 0, 00:09:35.089 "rw_mbytes_per_sec": 0, 00:09:35.089 "r_mbytes_per_sec": 0, 00:09:35.089 "w_mbytes_per_sec": 0 00:09:35.089 }, 00:09:35.089 "claimed": false, 00:09:35.089 "zoned": false, 00:09:35.089 "supported_io_types": { 00:09:35.089 "read": true, 00:09:35.089 "write": true, 00:09:35.089 "unmap": true, 00:09:35.089 "flush": true, 00:09:35.089 "reset": true, 00:09:35.089 "nvme_admin": false, 00:09:35.089 "nvme_io": false, 00:09:35.089 "nvme_io_md": false, 00:09:35.089 "write_zeroes": true, 00:09:35.089 
"zcopy": true, 00:09:35.089 "get_zone_info": false, 00:09:35.089 "zone_management": false, 00:09:35.089 "zone_append": false, 00:09:35.089 "compare": false, 00:09:35.089 "compare_and_write": false, 00:09:35.089 "abort": true, 00:09:35.089 "seek_hole": false, 00:09:35.089 "seek_data": false, 00:09:35.089 "copy": true, 00:09:35.089 "nvme_iov_md": false 00:09:35.089 }, 00:09:35.089 "memory_domains": [ 00:09:35.089 { 00:09:35.089 "dma_device_id": "system", 00:09:35.089 "dma_device_type": 1 00:09:35.089 }, 00:09:35.089 { 00:09:35.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.089 "dma_device_type": 2 00:09:35.089 } 00:09:35.089 ], 00:09:35.089 "driver_specific": {} 00:09:35.089 } 00:09:35.089 ] 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.089 [2024-11-15 11:21:17.794050] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.089 [2024-11-15 11:21:17.794334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.089 [2024-11-15 11:21:17.794491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.089 [2024-11-15 11:21:17.797126] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.089 11:21:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.089 "name": "Existed_Raid", 00:09:35.089 "uuid": "a6a86155-e80d-4563-8c09-05859ea43cfa", 00:09:35.089 "strip_size_kb": 64, 00:09:35.089 "state": "configuring", 00:09:35.089 "raid_level": "raid0", 00:09:35.089 "superblock": true, 00:09:35.089 "num_base_bdevs": 3, 00:09:35.089 "num_base_bdevs_discovered": 2, 00:09:35.089 "num_base_bdevs_operational": 3, 00:09:35.089 "base_bdevs_list": [ 00:09:35.089 { 00:09:35.089 "name": "BaseBdev1", 00:09:35.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.089 "is_configured": false, 00:09:35.089 "data_offset": 0, 00:09:35.089 "data_size": 0 00:09:35.089 }, 00:09:35.089 { 00:09:35.089 "name": "BaseBdev2", 00:09:35.089 "uuid": "17dabf9c-ed1e-4b66-8f4c-94ee36ae219a", 00:09:35.089 "is_configured": true, 00:09:35.089 "data_offset": 2048, 00:09:35.089 "data_size": 63488 00:09:35.089 }, 00:09:35.089 { 00:09:35.089 "name": "BaseBdev3", 00:09:35.089 "uuid": "dcda2552-2395-4baf-9360-9db82e65da9e", 00:09:35.089 "is_configured": true, 00:09:35.089 "data_offset": 2048, 00:09:35.089 "data_size": 63488 00:09:35.089 } 00:09:35.089 ] 00:09:35.089 }' 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.089 11:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.656 [2024-11-15 11:21:18.318282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.656 11:21:18 
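In the `configuring` JSON above, `num_base_bdevs_discovered` tracks how many entries in `base_bdevs_list` are actually configured; the not-yet-created `BaseBdev1` shows up as an all-zero placeholder. That relationship can be sketched against a trimmed version of the logged object (field names verbatim from the log):

```python
import json

# Trimmed version of the "configuring" Existed_Raid JSON from the log:
# BaseBdev1 does not exist yet, so its slot is an unconfigured placeholder
# with the all-zero UUID.
info = json.loads("""
{
  "state": "configuring",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1",
     "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
assert configured == info["num_base_bdevs_discovered"]  # 2 of 3 discovered
assert configured < info["num_base_bdevs"]              # hence "configuring"
print(f"{configured}/{info['num_base_bdevs']} base bdevs configured")
```

Only once all three slots are configured does the superblock test expect the array to transition out of `configuring`.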
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.656 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.657 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.657 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.657 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.657 "name": "Existed_Raid", 00:09:35.657 "uuid": "a6a86155-e80d-4563-8c09-05859ea43cfa", 00:09:35.657 "strip_size_kb": 64, 
00:09:35.657 "state": "configuring", 00:09:35.657 "raid_level": "raid0", 00:09:35.657 "superblock": true, 00:09:35.657 "num_base_bdevs": 3, 00:09:35.657 "num_base_bdevs_discovered": 1, 00:09:35.657 "num_base_bdevs_operational": 3, 00:09:35.657 "base_bdevs_list": [ 00:09:35.657 { 00:09:35.657 "name": "BaseBdev1", 00:09:35.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.657 "is_configured": false, 00:09:35.657 "data_offset": 0, 00:09:35.657 "data_size": 0 00:09:35.657 }, 00:09:35.657 { 00:09:35.657 "name": null, 00:09:35.657 "uuid": "17dabf9c-ed1e-4b66-8f4c-94ee36ae219a", 00:09:35.657 "is_configured": false, 00:09:35.657 "data_offset": 0, 00:09:35.657 "data_size": 63488 00:09:35.657 }, 00:09:35.657 { 00:09:35.657 "name": "BaseBdev3", 00:09:35.657 "uuid": "dcda2552-2395-4baf-9360-9db82e65da9e", 00:09:35.657 "is_configured": true, 00:09:35.657 "data_offset": 2048, 00:09:35.657 "data_size": 63488 00:09:35.657 } 00:09:35.657 ] 00:09:35.657 }' 00:09:35.657 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.657 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.915 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.915 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:35.915 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.915 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.915 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.176 [2024-11-15 11:21:18.940100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.176 BaseBdev1 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.176 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.176 
[ 00:09:36.176 { 00:09:36.176 "name": "BaseBdev1", 00:09:36.176 "aliases": [ 00:09:36.176 "605f5fcf-ecb4-4a27-b28f-20c93588af65" 00:09:36.176 ], 00:09:36.176 "product_name": "Malloc disk", 00:09:36.176 "block_size": 512, 00:09:36.176 "num_blocks": 65536, 00:09:36.176 "uuid": "605f5fcf-ecb4-4a27-b28f-20c93588af65", 00:09:36.176 "assigned_rate_limits": { 00:09:36.176 "rw_ios_per_sec": 0, 00:09:36.176 "rw_mbytes_per_sec": 0, 00:09:36.176 "r_mbytes_per_sec": 0, 00:09:36.176 "w_mbytes_per_sec": 0 00:09:36.176 }, 00:09:36.176 "claimed": true, 00:09:36.176 "claim_type": "exclusive_write", 00:09:36.176 "zoned": false, 00:09:36.176 "supported_io_types": { 00:09:36.176 "read": true, 00:09:36.176 "write": true, 00:09:36.176 "unmap": true, 00:09:36.176 "flush": true, 00:09:36.176 "reset": true, 00:09:36.176 "nvme_admin": false, 00:09:36.176 "nvme_io": false, 00:09:36.176 "nvme_io_md": false, 00:09:36.176 "write_zeroes": true, 00:09:36.176 "zcopy": true, 00:09:36.176 "get_zone_info": false, 00:09:36.176 "zone_management": false, 00:09:36.176 "zone_append": false, 00:09:36.176 "compare": false, 00:09:36.176 "compare_and_write": false, 00:09:36.176 "abort": true, 00:09:36.176 "seek_hole": false, 00:09:36.176 "seek_data": false, 00:09:36.176 "copy": true, 00:09:36.177 "nvme_iov_md": false 00:09:36.177 }, 00:09:36.177 "memory_domains": [ 00:09:36.177 { 00:09:36.177 "dma_device_id": "system", 00:09:36.177 "dma_device_type": 1 00:09:36.177 }, 00:09:36.177 { 00:09:36.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.177 "dma_device_type": 2 00:09:36.177 } 00:09:36.177 ], 00:09:36.177 "driver_specific": {} 00:09:36.177 } 00:09:36.177 ] 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.177 11:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.177 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.177 "name": "Existed_Raid", 00:09:36.177 "uuid": "a6a86155-e80d-4563-8c09-05859ea43cfa", 00:09:36.177 "strip_size_kb": 64, 00:09:36.177 "state": "configuring", 00:09:36.177 "raid_level": "raid0", 00:09:36.177 "superblock": true, 
00:09:36.177 "num_base_bdevs": 3, 00:09:36.177 "num_base_bdevs_discovered": 2, 00:09:36.177 "num_base_bdevs_operational": 3, 00:09:36.177 "base_bdevs_list": [ 00:09:36.177 { 00:09:36.177 "name": "BaseBdev1", 00:09:36.177 "uuid": "605f5fcf-ecb4-4a27-b28f-20c93588af65", 00:09:36.177 "is_configured": true, 00:09:36.177 "data_offset": 2048, 00:09:36.177 "data_size": 63488 00:09:36.177 }, 00:09:36.177 { 00:09:36.177 "name": null, 00:09:36.177 "uuid": "17dabf9c-ed1e-4b66-8f4c-94ee36ae219a", 00:09:36.177 "is_configured": false, 00:09:36.177 "data_offset": 0, 00:09:36.177 "data_size": 63488 00:09:36.177 }, 00:09:36.177 { 00:09:36.177 "name": "BaseBdev3", 00:09:36.177 "uuid": "dcda2552-2395-4baf-9360-9db82e65da9e", 00:09:36.177 "is_configured": true, 00:09:36.177 "data_offset": 2048, 00:09:36.177 "data_size": 63488 00:09:36.177 } 00:09:36.177 ] 00:09:36.177 }' 00:09:36.177 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.177 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.744 [2024-11-15 11:21:19.548465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.744 "name": "Existed_Raid", 00:09:36.744 "uuid": "a6a86155-e80d-4563-8c09-05859ea43cfa", 00:09:36.744 "strip_size_kb": 64, 00:09:36.744 "state": "configuring", 00:09:36.744 "raid_level": "raid0", 00:09:36.744 "superblock": true, 00:09:36.744 "num_base_bdevs": 3, 00:09:36.744 "num_base_bdevs_discovered": 1, 00:09:36.744 "num_base_bdevs_operational": 3, 00:09:36.744 "base_bdevs_list": [ 00:09:36.744 { 00:09:36.744 "name": "BaseBdev1", 00:09:36.744 "uuid": "605f5fcf-ecb4-4a27-b28f-20c93588af65", 00:09:36.744 "is_configured": true, 00:09:36.744 "data_offset": 2048, 00:09:36.744 "data_size": 63488 00:09:36.744 }, 00:09:36.744 { 00:09:36.744 "name": null, 00:09:36.744 "uuid": "17dabf9c-ed1e-4b66-8f4c-94ee36ae219a", 00:09:36.744 "is_configured": false, 00:09:36.744 "data_offset": 0, 00:09:36.744 "data_size": 63488 00:09:36.744 }, 00:09:36.744 { 00:09:36.744 "name": null, 00:09:36.744 "uuid": "dcda2552-2395-4baf-9360-9db82e65da9e", 00:09:36.744 "is_configured": false, 00:09:36.744 "data_offset": 0, 00:09:36.744 "data_size": 63488 00:09:36.744 } 00:09:36.744 ] 00:09:36.744 }' 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.744 11:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.313 [2024-11-15 11:21:20.116706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.313 "name": "Existed_Raid", 00:09:37.313 "uuid": "a6a86155-e80d-4563-8c09-05859ea43cfa", 00:09:37.313 "strip_size_kb": 64, 00:09:37.313 "state": "configuring", 00:09:37.313 "raid_level": "raid0", 00:09:37.313 "superblock": true, 00:09:37.313 "num_base_bdevs": 3, 00:09:37.313 "num_base_bdevs_discovered": 2, 00:09:37.313 "num_base_bdevs_operational": 3, 00:09:37.313 "base_bdevs_list": [ 00:09:37.313 { 00:09:37.313 "name": "BaseBdev1", 00:09:37.313 "uuid": "605f5fcf-ecb4-4a27-b28f-20c93588af65", 00:09:37.313 "is_configured": true, 00:09:37.313 "data_offset": 2048, 00:09:37.313 "data_size": 63488 00:09:37.313 }, 00:09:37.313 { 00:09:37.313 "name": null, 00:09:37.313 "uuid": "17dabf9c-ed1e-4b66-8f4c-94ee36ae219a", 00:09:37.313 "is_configured": false, 00:09:37.313 "data_offset": 0, 00:09:37.313 "data_size": 63488 00:09:37.313 }, 00:09:37.313 { 00:09:37.313 "name": "BaseBdev3", 00:09:37.313 "uuid": "dcda2552-2395-4baf-9360-9db82e65da9e", 00:09:37.313 "is_configured": true, 00:09:37.313 "data_offset": 2048, 00:09:37.313 "data_size": 63488 00:09:37.313 } 00:09:37.313 ] 00:09:37.313 }' 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.313 11:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.881 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.881 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.882 [2024-11-15 11:21:20.680998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.882 "name": "Existed_Raid", 00:09:37.882 "uuid": "a6a86155-e80d-4563-8c09-05859ea43cfa", 00:09:37.882 "strip_size_kb": 64, 00:09:37.882 "state": "configuring", 00:09:37.882 "raid_level": "raid0", 00:09:37.882 "superblock": true, 00:09:37.882 "num_base_bdevs": 3, 00:09:37.882 "num_base_bdevs_discovered": 1, 00:09:37.882 "num_base_bdevs_operational": 3, 00:09:37.882 "base_bdevs_list": [ 00:09:37.882 { 00:09:37.882 "name": null, 00:09:37.882 "uuid": "605f5fcf-ecb4-4a27-b28f-20c93588af65", 00:09:37.882 "is_configured": false, 00:09:37.882 "data_offset": 0, 00:09:37.882 "data_size": 63488 00:09:37.882 }, 00:09:37.882 { 00:09:37.882 "name": null, 00:09:37.882 "uuid": "17dabf9c-ed1e-4b66-8f4c-94ee36ae219a", 00:09:37.882 "is_configured": false, 00:09:37.882 "data_offset": 0, 00:09:37.882 
"data_size": 63488 00:09:37.882 }, 00:09:37.882 { 00:09:37.882 "name": "BaseBdev3", 00:09:37.882 "uuid": "dcda2552-2395-4baf-9360-9db82e65da9e", 00:09:37.882 "is_configured": true, 00:09:37.882 "data_offset": 2048, 00:09:37.882 "data_size": 63488 00:09:37.882 } 00:09:37.882 ] 00:09:37.882 }' 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.882 11:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.488 [2024-11-15 11:21:21.357518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:38.488 11:21:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.488 "name": "Existed_Raid", 00:09:38.488 "uuid": "a6a86155-e80d-4563-8c09-05859ea43cfa", 00:09:38.488 "strip_size_kb": 64, 00:09:38.488 "state": "configuring", 00:09:38.488 "raid_level": "raid0", 00:09:38.488 "superblock": true, 00:09:38.488 "num_base_bdevs": 3, 00:09:38.488 
"num_base_bdevs_discovered": 2, 00:09:38.488 "num_base_bdevs_operational": 3, 00:09:38.488 "base_bdevs_list": [ 00:09:38.488 { 00:09:38.488 "name": null, 00:09:38.488 "uuid": "605f5fcf-ecb4-4a27-b28f-20c93588af65", 00:09:38.488 "is_configured": false, 00:09:38.488 "data_offset": 0, 00:09:38.488 "data_size": 63488 00:09:38.488 }, 00:09:38.488 { 00:09:38.488 "name": "BaseBdev2", 00:09:38.488 "uuid": "17dabf9c-ed1e-4b66-8f4c-94ee36ae219a", 00:09:38.488 "is_configured": true, 00:09:38.488 "data_offset": 2048, 00:09:38.488 "data_size": 63488 00:09:38.488 }, 00:09:38.488 { 00:09:38.488 "name": "BaseBdev3", 00:09:38.488 "uuid": "dcda2552-2395-4baf-9360-9db82e65da9e", 00:09:38.488 "is_configured": true, 00:09:38.488 "data_offset": 2048, 00:09:38.488 "data_size": 63488 00:09:38.488 } 00:09:38.488 ] 00:09:38.488 }' 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.488 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:39.055 11:21:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 605f5fcf-ecb4-4a27-b28f-20c93588af65 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.055 11:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.315 [2024-11-15 11:21:22.028424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:39.315 [2024-11-15 11:21:22.028728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:39.315 [2024-11-15 11:21:22.028753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:39.315 [2024-11-15 11:21:22.029100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:39.315 NewBaseBdev 00:09:39.315 [2024-11-15 11:21:22.029326] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:39.315 [2024-11-15 11:21:22.029344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:39.315 [2024-11-15 11:21:22.029518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:39.315 
11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.315 [ 00:09:39.315 { 00:09:39.315 "name": "NewBaseBdev", 00:09:39.315 "aliases": [ 00:09:39.315 "605f5fcf-ecb4-4a27-b28f-20c93588af65" 00:09:39.315 ], 00:09:39.315 "product_name": "Malloc disk", 00:09:39.315 "block_size": 512, 00:09:39.315 "num_blocks": 65536, 00:09:39.315 "uuid": "605f5fcf-ecb4-4a27-b28f-20c93588af65", 00:09:39.315 "assigned_rate_limits": { 00:09:39.315 "rw_ios_per_sec": 0, 00:09:39.315 "rw_mbytes_per_sec": 0, 00:09:39.315 "r_mbytes_per_sec": 0, 00:09:39.315 "w_mbytes_per_sec": 0 00:09:39.315 }, 00:09:39.315 "claimed": true, 00:09:39.315 "claim_type": "exclusive_write", 00:09:39.315 "zoned": false, 00:09:39.315 "supported_io_types": { 00:09:39.315 "read": true, 00:09:39.315 "write": true, 00:09:39.315 
"unmap": true, 00:09:39.315 "flush": true, 00:09:39.315 "reset": true, 00:09:39.315 "nvme_admin": false, 00:09:39.315 "nvme_io": false, 00:09:39.315 "nvme_io_md": false, 00:09:39.315 "write_zeroes": true, 00:09:39.315 "zcopy": true, 00:09:39.315 "get_zone_info": false, 00:09:39.315 "zone_management": false, 00:09:39.315 "zone_append": false, 00:09:39.315 "compare": false, 00:09:39.315 "compare_and_write": false, 00:09:39.315 "abort": true, 00:09:39.315 "seek_hole": false, 00:09:39.315 "seek_data": false, 00:09:39.315 "copy": true, 00:09:39.315 "nvme_iov_md": false 00:09:39.315 }, 00:09:39.315 "memory_domains": [ 00:09:39.315 { 00:09:39.315 "dma_device_id": "system", 00:09:39.315 "dma_device_type": 1 00:09:39.315 }, 00:09:39.315 { 00:09:39.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.315 "dma_device_type": 2 00:09:39.315 } 00:09:39.315 ], 00:09:39.315 "driver_specific": {} 00:09:39.315 } 00:09:39.315 ] 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.315 "name": "Existed_Raid", 00:09:39.315 "uuid": "a6a86155-e80d-4563-8c09-05859ea43cfa", 00:09:39.315 "strip_size_kb": 64, 00:09:39.315 "state": "online", 00:09:39.315 "raid_level": "raid0", 00:09:39.315 "superblock": true, 00:09:39.315 "num_base_bdevs": 3, 00:09:39.315 "num_base_bdevs_discovered": 3, 00:09:39.315 "num_base_bdevs_operational": 3, 00:09:39.315 "base_bdevs_list": [ 00:09:39.315 { 00:09:39.315 "name": "NewBaseBdev", 00:09:39.315 "uuid": "605f5fcf-ecb4-4a27-b28f-20c93588af65", 00:09:39.315 "is_configured": true, 00:09:39.315 "data_offset": 2048, 00:09:39.315 "data_size": 63488 00:09:39.315 }, 00:09:39.315 { 00:09:39.315 "name": "BaseBdev2", 00:09:39.315 "uuid": "17dabf9c-ed1e-4b66-8f4c-94ee36ae219a", 00:09:39.315 "is_configured": true, 00:09:39.315 "data_offset": 2048, 00:09:39.315 "data_size": 63488 00:09:39.315 }, 00:09:39.315 { 00:09:39.315 "name": "BaseBdev3", 00:09:39.315 "uuid": "dcda2552-2395-4baf-9360-9db82e65da9e", 00:09:39.315 
"is_configured": true, 00:09:39.315 "data_offset": 2048, 00:09:39.315 "data_size": 63488 00:09:39.315 } 00:09:39.315 ] 00:09:39.315 }' 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.315 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.884 [2024-11-15 11:21:22.585053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.884 "name": "Existed_Raid", 00:09:39.884 "aliases": [ 00:09:39.884 "a6a86155-e80d-4563-8c09-05859ea43cfa" 00:09:39.884 ], 00:09:39.884 "product_name": "Raid 
Volume", 00:09:39.884 "block_size": 512, 00:09:39.884 "num_blocks": 190464, 00:09:39.884 "uuid": "a6a86155-e80d-4563-8c09-05859ea43cfa", 00:09:39.884 "assigned_rate_limits": { 00:09:39.884 "rw_ios_per_sec": 0, 00:09:39.884 "rw_mbytes_per_sec": 0, 00:09:39.884 "r_mbytes_per_sec": 0, 00:09:39.884 "w_mbytes_per_sec": 0 00:09:39.884 }, 00:09:39.884 "claimed": false, 00:09:39.884 "zoned": false, 00:09:39.884 "supported_io_types": { 00:09:39.884 "read": true, 00:09:39.884 "write": true, 00:09:39.884 "unmap": true, 00:09:39.884 "flush": true, 00:09:39.884 "reset": true, 00:09:39.884 "nvme_admin": false, 00:09:39.884 "nvme_io": false, 00:09:39.884 "nvme_io_md": false, 00:09:39.884 "write_zeroes": true, 00:09:39.884 "zcopy": false, 00:09:39.884 "get_zone_info": false, 00:09:39.884 "zone_management": false, 00:09:39.884 "zone_append": false, 00:09:39.884 "compare": false, 00:09:39.884 "compare_and_write": false, 00:09:39.884 "abort": false, 00:09:39.884 "seek_hole": false, 00:09:39.884 "seek_data": false, 00:09:39.884 "copy": false, 00:09:39.884 "nvme_iov_md": false 00:09:39.884 }, 00:09:39.884 "memory_domains": [ 00:09:39.884 { 00:09:39.884 "dma_device_id": "system", 00:09:39.884 "dma_device_type": 1 00:09:39.884 }, 00:09:39.884 { 00:09:39.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.884 "dma_device_type": 2 00:09:39.884 }, 00:09:39.884 { 00:09:39.884 "dma_device_id": "system", 00:09:39.884 "dma_device_type": 1 00:09:39.884 }, 00:09:39.884 { 00:09:39.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.884 "dma_device_type": 2 00:09:39.884 }, 00:09:39.884 { 00:09:39.884 "dma_device_id": "system", 00:09:39.884 "dma_device_type": 1 00:09:39.884 }, 00:09:39.884 { 00:09:39.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.884 "dma_device_type": 2 00:09:39.884 } 00:09:39.884 ], 00:09:39.884 "driver_specific": { 00:09:39.884 "raid": { 00:09:39.884 "uuid": "a6a86155-e80d-4563-8c09-05859ea43cfa", 00:09:39.884 "strip_size_kb": 64, 00:09:39.884 "state": "online", 
00:09:39.884 "raid_level": "raid0", 00:09:39.884 "superblock": true, 00:09:39.884 "num_base_bdevs": 3, 00:09:39.884 "num_base_bdevs_discovered": 3, 00:09:39.884 "num_base_bdevs_operational": 3, 00:09:39.884 "base_bdevs_list": [ 00:09:39.884 { 00:09:39.884 "name": "NewBaseBdev", 00:09:39.884 "uuid": "605f5fcf-ecb4-4a27-b28f-20c93588af65", 00:09:39.884 "is_configured": true, 00:09:39.884 "data_offset": 2048, 00:09:39.884 "data_size": 63488 00:09:39.884 }, 00:09:39.884 { 00:09:39.884 "name": "BaseBdev2", 00:09:39.884 "uuid": "17dabf9c-ed1e-4b66-8f4c-94ee36ae219a", 00:09:39.884 "is_configured": true, 00:09:39.884 "data_offset": 2048, 00:09:39.884 "data_size": 63488 00:09:39.884 }, 00:09:39.884 { 00:09:39.884 "name": "BaseBdev3", 00:09:39.884 "uuid": "dcda2552-2395-4baf-9360-9db82e65da9e", 00:09:39.884 "is_configured": true, 00:09:39.884 "data_offset": 2048, 00:09:39.884 "data_size": 63488 00:09:39.884 } 00:09:39.884 ] 00:09:39.884 } 00:09:39.884 } 00:09:39.884 }' 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.884 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:39.884 BaseBdev2 00:09:39.884 BaseBdev3' 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.885 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.144 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.144 [2024-11-15 11:21:22.904736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.144 [2024-11-15 11:21:22.904773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.144 [2024-11-15 11:21:22.904897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.145 [2024-11-15 11:21:22.904990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.145 [2024-11-15 11:21:22.905010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:40.145 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.145 11:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64306 00:09:40.145 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64306 ']' 00:09:40.145 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 64306 00:09:40.145 11:21:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:40.145 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:40.145 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64306 00:09:40.145 killing process with pid 64306 00:09:40.145 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:40.145 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:40.145 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64306' 00:09:40.145 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64306 00:09:40.145 [2024-11-15 11:21:22.945816] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.145 11:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64306 00:09:40.403 [2024-11-15 11:21:23.221587] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.405 ************************************ 00:09:41.405 END TEST raid_state_function_test_sb 00:09:41.405 ************************************ 00:09:41.405 11:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:41.405 00:09:41.405 real 0m12.005s 00:09:41.405 user 0m19.755s 00:09:41.405 sys 0m1.728s 00:09:41.405 11:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:41.405 11:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.663 11:21:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:41.663 11:21:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:41.663 11:21:24 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:09:41.663 11:21:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.663 ************************************ 00:09:41.663 START TEST raid_superblock_test 00:09:41.663 ************************************ 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:41.663 11:21:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64943 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64943 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 64943 ']' 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:41.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:41.663 11:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.663 [2024-11-15 11:21:24.498079] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:09:41.663 [2024-11-15 11:21:24.498284] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64943 ] 00:09:41.923 [2024-11-15 11:21:24.676884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.923 [2024-11-15 11:21:24.813886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.181 [2024-11-15 11:21:25.020672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.181 [2024-11-15 11:21:25.020747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.748 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:42.748 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:42.748 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:42.748 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.748 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:42.748 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:42.749 
11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.749 malloc1 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.749 [2024-11-15 11:21:25.468205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:42.749 [2024-11-15 11:21:25.468309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.749 [2024-11-15 11:21:25.468342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:42.749 [2024-11-15 11:21:25.468358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.749 [2024-11-15 11:21:25.471498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.749 [2024-11-15 11:21:25.471557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:42.749 pt1 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.749 malloc2 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.749 [2024-11-15 11:21:25.529055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:42.749 [2024-11-15 11:21:25.529140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.749 [2024-11-15 11:21:25.529193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:42.749 [2024-11-15 11:21:25.529241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.749 [2024-11-15 11:21:25.532414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.749 [2024-11-15 11:21:25.532458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:42.749 
pt2 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.749 malloc3 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.749 [2024-11-15 11:21:25.596698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:42.749 [2024-11-15 11:21:25.596776] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.749 [2024-11-15 11:21:25.596809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:42.749 [2024-11-15 11:21:25.596824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.749 [2024-11-15 11:21:25.599897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.749 [2024-11-15 11:21:25.599944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:42.749 pt3 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.749 [2024-11-15 11:21:25.608810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:42.749 [2024-11-15 11:21:25.611667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.749 [2024-11-15 11:21:25.611761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:42.749 [2024-11-15 11:21:25.611960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:42.749 [2024-11-15 11:21:25.611987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:42.749 [2024-11-15 11:21:25.612499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
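The trace above walks through the standard superblock-test setup: `bdev_malloc_create 32 512 -b mallocN`, `bdev_passthru_create` over each malloc bdev, then `bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s`. As a rough sketch of what the `rpc_cmd` wrapper ends up sending over `/var/tmp/spdk.sock`, the calls reduce to JSON-RPC 2.0 requests like the ones below. The parameter key names (`num_blocks`, `strip_size_kb`, `base_bdevs`, `superblock`) are assumptions about the SPDK RPC schema for illustration, not taken from this log; the numeric values are.

```python
import json

def rpc_request(method, params, req_id=1):
    # Shape of a JSON-RPC 2.0 request, as sent by SPDK's rpc.py to spdk.sock.
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}

# bdev_malloc_create 32 512 -b malloc1: a 32 MiB backing store with 512-byte
# blocks -- which matches the num_blocks=65536 seen in the bdev dumps above.
malloc_req = rpc_request("bdev_malloc_create",
                         {"num_blocks": 32 * 1024 * 1024 // 512,
                          "block_size": 512,
                          "name": "malloc1"})

# bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
# (param key names here are illustrative assumptions about the RPC schema)
raid_req = rpc_request("bdev_raid_create",
                       {"name": "raid_bdev1",
                        "raid_level": "raid0",
                        "strip_size_kb": 64,
                        "base_bdevs": ["pt1", "pt2", "pt3"],
                        "superblock": True})

print(json.dumps(raid_req, indent=2))
```

Note that 32 MiB of 512-byte blocks gives 65536 blocks per base bdev, consistent with the `"num_blocks": 65536` malloc dumps earlier in this trace.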
00:09:42.749 [2024-11-15 11:21:25.612913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:42.749 [2024-11-15 11:21:25.612937] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:42.749 [2024-11-15 11:21:25.613261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.749 11:21:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.749 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.749 "name": "raid_bdev1", 00:09:42.749 "uuid": "3a40eaf8-067e-4807-a763-ed1f2b04068e", 00:09:42.749 "strip_size_kb": 64, 00:09:42.749 "state": "online", 00:09:42.749 "raid_level": "raid0", 00:09:42.749 "superblock": true, 00:09:42.749 "num_base_bdevs": 3, 00:09:42.749 "num_base_bdevs_discovered": 3, 00:09:42.749 "num_base_bdevs_operational": 3, 00:09:42.749 "base_bdevs_list": [ 00:09:42.749 { 00:09:42.749 "name": "pt1", 00:09:42.749 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.749 "is_configured": true, 00:09:42.749 "data_offset": 2048, 00:09:42.749 "data_size": 63488 00:09:42.750 }, 00:09:42.750 { 00:09:42.750 "name": "pt2", 00:09:42.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.750 "is_configured": true, 00:09:42.750 "data_offset": 2048, 00:09:42.750 "data_size": 63488 00:09:42.750 }, 00:09:42.750 { 00:09:42.750 "name": "pt3", 00:09:42.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.750 "is_configured": true, 00:09:42.750 "data_offset": 2048, 00:09:42.750 "data_size": 63488 00:09:42.750 } 00:09:42.750 ] 00:09:42.750 }' 00:09:42.750 11:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.750 11:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.317 [2024-11-15 11:21:26.141840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:43.317 "name": "raid_bdev1", 00:09:43.317 "aliases": [ 00:09:43.317 "3a40eaf8-067e-4807-a763-ed1f2b04068e" 00:09:43.317 ], 00:09:43.317 "product_name": "Raid Volume", 00:09:43.317 "block_size": 512, 00:09:43.317 "num_blocks": 190464, 00:09:43.317 "uuid": "3a40eaf8-067e-4807-a763-ed1f2b04068e", 00:09:43.317 "assigned_rate_limits": { 00:09:43.317 "rw_ios_per_sec": 0, 00:09:43.317 "rw_mbytes_per_sec": 0, 00:09:43.317 "r_mbytes_per_sec": 0, 00:09:43.317 "w_mbytes_per_sec": 0 00:09:43.317 }, 00:09:43.317 "claimed": false, 00:09:43.317 "zoned": false, 00:09:43.317 "supported_io_types": { 00:09:43.317 "read": true, 00:09:43.317 "write": true, 00:09:43.317 "unmap": true, 00:09:43.317 "flush": true, 00:09:43.317 "reset": true, 00:09:43.317 "nvme_admin": false, 00:09:43.317 "nvme_io": false, 00:09:43.317 "nvme_io_md": false, 00:09:43.317 "write_zeroes": true, 00:09:43.317 "zcopy": false, 00:09:43.317 "get_zone_info": false, 00:09:43.317 "zone_management": false, 00:09:43.317 "zone_append": false, 00:09:43.317 "compare": 
false, 00:09:43.317 "compare_and_write": false, 00:09:43.317 "abort": false, 00:09:43.317 "seek_hole": false, 00:09:43.317 "seek_data": false, 00:09:43.317 "copy": false, 00:09:43.317 "nvme_iov_md": false 00:09:43.317 }, 00:09:43.317 "memory_domains": [ 00:09:43.317 { 00:09:43.317 "dma_device_id": "system", 00:09:43.317 "dma_device_type": 1 00:09:43.317 }, 00:09:43.317 { 00:09:43.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.317 "dma_device_type": 2 00:09:43.317 }, 00:09:43.317 { 00:09:43.317 "dma_device_id": "system", 00:09:43.317 "dma_device_type": 1 00:09:43.317 }, 00:09:43.317 { 00:09:43.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.317 "dma_device_type": 2 00:09:43.317 }, 00:09:43.317 { 00:09:43.317 "dma_device_id": "system", 00:09:43.317 "dma_device_type": 1 00:09:43.317 }, 00:09:43.317 { 00:09:43.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.317 "dma_device_type": 2 00:09:43.317 } 00:09:43.317 ], 00:09:43.317 "driver_specific": { 00:09:43.317 "raid": { 00:09:43.317 "uuid": "3a40eaf8-067e-4807-a763-ed1f2b04068e", 00:09:43.317 "strip_size_kb": 64, 00:09:43.317 "state": "online", 00:09:43.317 "raid_level": "raid0", 00:09:43.317 "superblock": true, 00:09:43.317 "num_base_bdevs": 3, 00:09:43.317 "num_base_bdevs_discovered": 3, 00:09:43.317 "num_base_bdevs_operational": 3, 00:09:43.317 "base_bdevs_list": [ 00:09:43.317 { 00:09:43.317 "name": "pt1", 00:09:43.317 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.317 "is_configured": true, 00:09:43.317 "data_offset": 2048, 00:09:43.317 "data_size": 63488 00:09:43.317 }, 00:09:43.317 { 00:09:43.317 "name": "pt2", 00:09:43.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.317 "is_configured": true, 00:09:43.317 "data_offset": 2048, 00:09:43.317 "data_size": 63488 00:09:43.317 }, 00:09:43.317 { 00:09:43.317 "name": "pt3", 00:09:43.317 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.317 "is_configured": true, 00:09:43.317 "data_offset": 2048, 00:09:43.317 "data_size": 
63488 00:09:43.317 } 00:09:43.317 ] 00:09:43.317 } 00:09:43.317 } 00:09:43.317 }' 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:43.317 pt2 00:09:43.317 pt3' 00:09:43.317 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.576 
11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:43.576 [2024-11-15 11:21:26.457733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3a40eaf8-067e-4807-a763-ed1f2b04068e 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3a40eaf8-067e-4807-a763-ed1f2b04068e ']' 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.576 [2024-11-15 11:21:26.509404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.576 [2024-11-15 11:21:26.509436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.576 [2024-11-15 11:21:26.509538] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.576 [2024-11-15 11:21:26.509661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.576 [2024-11-15 11:21:26.509675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:43.576 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.577 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:43.577 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.577 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.577 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:43.836 11:21:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.836 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.836 [2024-11-15 11:21:26.653511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:43.836 [2024-11-15 11:21:26.656264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:43.836 [2024-11-15 11:21:26.656338] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:43.836 [2024-11-15 11:21:26.656411] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:43.836 [2024-11-15 11:21:26.656482] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:43.836 [2024-11-15 11:21:26.656516] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:43.836 [2024-11-15 11:21:26.656544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.836 [2024-11-15 11:21:26.656560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:43.836 request: 00:09:43.836 { 00:09:43.836 "name": "raid_bdev1", 00:09:43.836 "raid_level": "raid0", 00:09:43.836 "base_bdevs": [ 00:09:43.836 "malloc1", 00:09:43.836 "malloc2", 00:09:43.836 "malloc3" 00:09:43.836 ], 00:09:43.837 "strip_size_kb": 64, 00:09:43.837 "superblock": false, 00:09:43.837 "method": "bdev_raid_create", 00:09:43.837 "req_id": 1 00:09:43.837 } 00:09:43.837 Got JSON-RPC error response 00:09:43.837 response: 00:09:43.837 { 00:09:43.837 "code": -17, 00:09:43.837 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:43.837 } 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.837 [2024-11-15 11:21:26.721485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:43.837 [2024-11-15 11:21:26.721733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.837 [2024-11-15 11:21:26.721806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:43.837 [2024-11-15 11:21:26.722027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.837 [2024-11-15 11:21:26.725133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.837 [2024-11-15 11:21:26.725324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:43.837 [2024-11-15 11:21:26.725534] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:43.837 pt1 00:09:43.837 [2024-11-15 11:21:26.725731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt1 is claimed 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.837 "name": "raid_bdev1", 00:09:43.837 "uuid": "3a40eaf8-067e-4807-a763-ed1f2b04068e", 00:09:43.837 
"strip_size_kb": 64, 00:09:43.837 "state": "configuring", 00:09:43.837 "raid_level": "raid0", 00:09:43.837 "superblock": true, 00:09:43.837 "num_base_bdevs": 3, 00:09:43.837 "num_base_bdevs_discovered": 1, 00:09:43.837 "num_base_bdevs_operational": 3, 00:09:43.837 "base_bdevs_list": [ 00:09:43.837 { 00:09:43.837 "name": "pt1", 00:09:43.837 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.837 "is_configured": true, 00:09:43.837 "data_offset": 2048, 00:09:43.837 "data_size": 63488 00:09:43.837 }, 00:09:43.837 { 00:09:43.837 "name": null, 00:09:43.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.837 "is_configured": false, 00:09:43.837 "data_offset": 2048, 00:09:43.837 "data_size": 63488 00:09:43.837 }, 00:09:43.837 { 00:09:43.837 "name": null, 00:09:43.837 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.837 "is_configured": false, 00:09:43.837 "data_offset": 2048, 00:09:43.837 "data_size": 63488 00:09:43.837 } 00:09:43.837 ] 00:09:43.837 }' 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.837 11:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.404 [2024-11-15 11:21:27.253797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.404 [2024-11-15 11:21:27.253884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.404 [2024-11-15 11:21:27.253947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:44.404 [2024-11-15 11:21:27.253963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.404 [2024-11-15 11:21:27.254618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.404 [2024-11-15 11:21:27.254649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.404 [2024-11-15 11:21:27.254800] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:44.404 [2024-11-15 11:21:27.254846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.404 pt2 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.404 [2024-11-15 11:21:27.261799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.404 11:21:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.404 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.404 "name": "raid_bdev1", 00:09:44.404 "uuid": "3a40eaf8-067e-4807-a763-ed1f2b04068e", 00:09:44.404 "strip_size_kb": 64, 00:09:44.404 "state": "configuring", 00:09:44.404 "raid_level": "raid0", 00:09:44.404 "superblock": true, 00:09:44.404 "num_base_bdevs": 3, 00:09:44.404 "num_base_bdevs_discovered": 1, 00:09:44.404 "num_base_bdevs_operational": 3, 00:09:44.404 "base_bdevs_list": [ 00:09:44.404 { 00:09:44.404 "name": "pt1", 00:09:44.404 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.404 "is_configured": true, 00:09:44.404 "data_offset": 2048, 00:09:44.404 "data_size": 63488 00:09:44.404 }, 00:09:44.404 { 00:09:44.404 "name": null, 00:09:44.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.404 "is_configured": false, 00:09:44.405 "data_offset": 0, 00:09:44.405 "data_size": 63488 00:09:44.405 }, 00:09:44.405 { 00:09:44.405 "name": null, 00:09:44.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.405 
"is_configured": false, 00:09:44.405 "data_offset": 2048, 00:09:44.405 "data_size": 63488 00:09:44.405 } 00:09:44.405 ] 00:09:44.405 }' 00:09:44.405 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.405 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.971 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:44.971 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:44.971 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.971 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.971 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.971 [2024-11-15 11:21:27.785931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.971 [2024-11-15 11:21:27.786007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.971 [2024-11-15 11:21:27.786034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:44.971 [2024-11-15 11:21:27.786052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.972 [2024-11-15 11:21:27.786666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.972 [2024-11-15 11:21:27.786697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.972 [2024-11-15 11:21:27.786785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:44.972 [2024-11-15 11:21:27.786820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.972 pt2 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.972 [2024-11-15 11:21:27.797932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:44.972 [2024-11-15 11:21:27.798006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.972 [2024-11-15 11:21:27.798028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:44.972 [2024-11-15 11:21:27.798043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.972 [2024-11-15 11:21:27.798556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.972 [2024-11-15 11:21:27.798609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:44.972 [2024-11-15 11:21:27.798686] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:44.972 [2024-11-15 11:21:27.798720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:44.972 [2024-11-15 11:21:27.798892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:44.972 [2024-11-15 11:21:27.798913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:44.972 [2024-11-15 11:21:27.799325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:44.972 [2024-11-15 11:21:27.799527] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:44.972 [2024-11-15 11:21:27.799543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:44.972 [2024-11-15 11:21:27.799741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.972 pt3 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.972 "name": "raid_bdev1", 00:09:44.972 "uuid": "3a40eaf8-067e-4807-a763-ed1f2b04068e", 00:09:44.972 "strip_size_kb": 64, 00:09:44.972 "state": "online", 00:09:44.972 "raid_level": "raid0", 00:09:44.972 "superblock": true, 00:09:44.972 "num_base_bdevs": 3, 00:09:44.972 "num_base_bdevs_discovered": 3, 00:09:44.972 "num_base_bdevs_operational": 3, 00:09:44.972 "base_bdevs_list": [ 00:09:44.972 { 00:09:44.972 "name": "pt1", 00:09:44.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.972 "is_configured": true, 00:09:44.972 "data_offset": 2048, 00:09:44.972 "data_size": 63488 00:09:44.972 }, 00:09:44.972 { 00:09:44.972 "name": "pt2", 00:09:44.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.972 "is_configured": true, 00:09:44.972 "data_offset": 2048, 00:09:44.972 "data_size": 63488 00:09:44.972 }, 00:09:44.972 { 00:09:44.972 "name": "pt3", 00:09:44.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.972 "is_configured": true, 00:09:44.972 "data_offset": 2048, 00:09:44.972 "data_size": 63488 00:09:44.972 } 00:09:44.972 ] 00:09:44.972 }' 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.972 11:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.538 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:45.539 11:21:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.539 [2024-11-15 11:21:28.334595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.539 "name": "raid_bdev1", 00:09:45.539 "aliases": [ 00:09:45.539 "3a40eaf8-067e-4807-a763-ed1f2b04068e" 00:09:45.539 ], 00:09:45.539 "product_name": "Raid Volume", 00:09:45.539 "block_size": 512, 00:09:45.539 "num_blocks": 190464, 00:09:45.539 "uuid": "3a40eaf8-067e-4807-a763-ed1f2b04068e", 00:09:45.539 "assigned_rate_limits": { 00:09:45.539 "rw_ios_per_sec": 0, 00:09:45.539 "rw_mbytes_per_sec": 0, 00:09:45.539 "r_mbytes_per_sec": 0, 00:09:45.539 "w_mbytes_per_sec": 0 00:09:45.539 }, 00:09:45.539 "claimed": false, 00:09:45.539 "zoned": false, 00:09:45.539 "supported_io_types": { 00:09:45.539 "read": true, 00:09:45.539 "write": true, 00:09:45.539 "unmap": true, 00:09:45.539 "flush": true, 00:09:45.539 "reset": true, 00:09:45.539 "nvme_admin": false, 00:09:45.539 "nvme_io": false, 00:09:45.539 "nvme_io_md": false, 00:09:45.539 
"write_zeroes": true, 00:09:45.539 "zcopy": false, 00:09:45.539 "get_zone_info": false, 00:09:45.539 "zone_management": false, 00:09:45.539 "zone_append": false, 00:09:45.539 "compare": false, 00:09:45.539 "compare_and_write": false, 00:09:45.539 "abort": false, 00:09:45.539 "seek_hole": false, 00:09:45.539 "seek_data": false, 00:09:45.539 "copy": false, 00:09:45.539 "nvme_iov_md": false 00:09:45.539 }, 00:09:45.539 "memory_domains": [ 00:09:45.539 { 00:09:45.539 "dma_device_id": "system", 00:09:45.539 "dma_device_type": 1 00:09:45.539 }, 00:09:45.539 { 00:09:45.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.539 "dma_device_type": 2 00:09:45.539 }, 00:09:45.539 { 00:09:45.539 "dma_device_id": "system", 00:09:45.539 "dma_device_type": 1 00:09:45.539 }, 00:09:45.539 { 00:09:45.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.539 "dma_device_type": 2 00:09:45.539 }, 00:09:45.539 { 00:09:45.539 "dma_device_id": "system", 00:09:45.539 "dma_device_type": 1 00:09:45.539 }, 00:09:45.539 { 00:09:45.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.539 "dma_device_type": 2 00:09:45.539 } 00:09:45.539 ], 00:09:45.539 "driver_specific": { 00:09:45.539 "raid": { 00:09:45.539 "uuid": "3a40eaf8-067e-4807-a763-ed1f2b04068e", 00:09:45.539 "strip_size_kb": 64, 00:09:45.539 "state": "online", 00:09:45.539 "raid_level": "raid0", 00:09:45.539 "superblock": true, 00:09:45.539 "num_base_bdevs": 3, 00:09:45.539 "num_base_bdevs_discovered": 3, 00:09:45.539 "num_base_bdevs_operational": 3, 00:09:45.539 "base_bdevs_list": [ 00:09:45.539 { 00:09:45.539 "name": "pt1", 00:09:45.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.539 "is_configured": true, 00:09:45.539 "data_offset": 2048, 00:09:45.539 "data_size": 63488 00:09:45.539 }, 00:09:45.539 { 00:09:45.539 "name": "pt2", 00:09:45.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.539 "is_configured": true, 00:09:45.539 "data_offset": 2048, 00:09:45.539 "data_size": 63488 00:09:45.539 }, 00:09:45.539 
{ 00:09:45.539 "name": "pt3", 00:09:45.539 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.539 "is_configured": true, 00:09:45.539 "data_offset": 2048, 00:09:45.539 "data_size": 63488 00:09:45.539 } 00:09:45.539 ] 00:09:45.539 } 00:09:45.539 } 00:09:45.539 }' 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:45.539 pt2 00:09:45.539 pt3' 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.539 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:45.798 11:21:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.798 
[2024-11-15 11:21:28.650577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3a40eaf8-067e-4807-a763-ed1f2b04068e '!=' 3a40eaf8-067e-4807-a763-ed1f2b04068e ']' 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64943 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 64943 ']' 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 64943 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64943 00:09:45.798 killing process with pid 64943 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64943' 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 64943 00:09:45.798 [2024-11-15 11:21:28.727953] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.798 11:21:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@976 -- # wait 64943 00:09:45.798 [2024-11-15 11:21:28.728064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.798 [2024-11-15 11:21:28.728142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.798 [2024-11-15 11:21:28.728162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:46.057 [2024-11-15 11:21:28.974691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.434 11:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:47.434 00:09:47.434 real 0m5.659s 00:09:47.434 user 0m8.445s 00:09:47.434 sys 0m0.895s 00:09:47.434 11:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.434 ************************************ 00:09:47.434 END TEST raid_superblock_test 00:09:47.434 ************************************ 00:09:47.434 11:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.434 11:21:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:47.434 11:21:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:47.434 11:21:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.434 11:21:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.434 ************************************ 00:09:47.434 START TEST raid_read_error_test 00:09:47.434 ************************************ 00:09:47.434 11:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:09:47.434 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:47.435 11:21:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:47.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ElfvkYDgwx 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65196 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65196 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65196 ']' 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:47.435 11:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 [2024-11-15 11:21:30.233622] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:09:47.435 [2024-11-15 11:21:30.233797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65196 ] 00:09:47.693 [2024-11-15 11:21:30.418048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.693 [2024-11-15 11:21:30.555149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.951 [2024-11-15 11:21:30.776122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.951 [2024-11-15 11:21:30.776163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.518 BaseBdev1_malloc 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.518 true 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.518 [2024-11-15 11:21:31.234872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:48.518 [2024-11-15 11:21:31.235161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.518 [2024-11-15 11:21:31.235250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:48.518 [2024-11-15 11:21:31.235274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.518 [2024-11-15 11:21:31.238331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.518 [2024-11-15 11:21:31.238384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:48.518 BaseBdev1 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.518 BaseBdev2_malloc 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.518 true 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.518 [2024-11-15 11:21:31.300030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:48.518 [2024-11-15 11:21:31.300111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.518 [2024-11-15 11:21:31.300137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:48.518 [2024-11-15 11:21:31.300153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.518 [2024-11-15 11:21:31.303247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.518 [2024-11-15 11:21:31.303293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:48.518 BaseBdev2 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.518 BaseBdev3_malloc 00:09:48.518 11:21:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.518 true 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.518 [2024-11-15 11:21:31.372181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:48.518 [2024-11-15 11:21:31.372297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.518 [2024-11-15 11:21:31.372325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:48.518 [2024-11-15 11:21:31.372360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.518 [2024-11-15 11:21:31.375347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.518 [2024-11-15 11:21:31.375395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:48.518 BaseBdev3 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.518 [2024-11-15 11:21:31.380295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.518 [2024-11-15 11:21:31.382916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.518 [2024-11-15 11:21:31.383269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.518 [2024-11-15 11:21:31.383601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:48.518 [2024-11-15 11:21:31.383623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:48.518 [2024-11-15 11:21:31.383928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:48.518 [2024-11-15 11:21:31.384131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:48.518 [2024-11-15 11:21:31.384153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:48.518 [2024-11-15 11:21:31.384400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.518 11:21:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.518 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.518 "name": "raid_bdev1", 00:09:48.518 "uuid": "379d018e-9906-4b3e-bdec-d9a7628cf29d", 00:09:48.518 "strip_size_kb": 64, 00:09:48.518 "state": "online", 00:09:48.518 "raid_level": "raid0", 00:09:48.518 "superblock": true, 00:09:48.518 "num_base_bdevs": 3, 00:09:48.518 "num_base_bdevs_discovered": 3, 00:09:48.518 "num_base_bdevs_operational": 3, 00:09:48.518 "base_bdevs_list": [ 00:09:48.518 { 00:09:48.518 "name": "BaseBdev1", 00:09:48.518 "uuid": "b0405e01-6d53-5e9d-8ff4-f8d91e02a128", 00:09:48.518 "is_configured": true, 00:09:48.518 "data_offset": 2048, 00:09:48.518 "data_size": 63488 00:09:48.518 }, 00:09:48.518 { 00:09:48.518 "name": "BaseBdev2", 00:09:48.518 "uuid": "cfea5ab7-4dcf-5de7-baf2-9e6368c3d7ac", 00:09:48.518 "is_configured": true, 00:09:48.518 "data_offset": 2048, 00:09:48.518 "data_size": 63488 
00:09:48.519 }, 00:09:48.519 { 00:09:48.519 "name": "BaseBdev3", 00:09:48.519 "uuid": "0856b173-0db3-5787-9408-a39ed2300a66", 00:09:48.519 "is_configured": true, 00:09:48.519 "data_offset": 2048, 00:09:48.519 "data_size": 63488 00:09:48.519 } 00:09:48.519 ] 00:09:48.519 }' 00:09:48.519 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.519 11:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.083 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.083 11:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:49.083 [2024-11-15 11:21:32.010067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.017 11:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.276 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.276 "name": "raid_bdev1", 00:09:50.276 "uuid": "379d018e-9906-4b3e-bdec-d9a7628cf29d", 00:09:50.276 "strip_size_kb": 64, 00:09:50.276 "state": "online", 00:09:50.276 "raid_level": "raid0", 00:09:50.276 "superblock": true, 00:09:50.276 "num_base_bdevs": 3, 00:09:50.276 "num_base_bdevs_discovered": 3, 00:09:50.276 "num_base_bdevs_operational": 3, 00:09:50.276 "base_bdevs_list": [ 00:09:50.276 { 00:09:50.276 "name": "BaseBdev1", 00:09:50.276 "uuid": "b0405e01-6d53-5e9d-8ff4-f8d91e02a128", 00:09:50.276 "is_configured": true, 00:09:50.276 "data_offset": 2048, 00:09:50.276 "data_size": 63488 
00:09:50.276 }, 00:09:50.276 { 00:09:50.276 "name": "BaseBdev2", 00:09:50.276 "uuid": "cfea5ab7-4dcf-5de7-baf2-9e6368c3d7ac", 00:09:50.276 "is_configured": true, 00:09:50.276 "data_offset": 2048, 00:09:50.276 "data_size": 63488 00:09:50.276 }, 00:09:50.276 { 00:09:50.276 "name": "BaseBdev3", 00:09:50.276 "uuid": "0856b173-0db3-5787-9408-a39ed2300a66", 00:09:50.276 "is_configured": true, 00:09:50.276 "data_offset": 2048, 00:09:50.276 "data_size": 63488 00:09:50.276 } 00:09:50.276 ] 00:09:50.276 }' 00:09:50.276 11:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.276 11:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.534 11:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.534 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.534 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.534 [2024-11-15 11:21:33.458847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.534 [2024-11-15 11:21:33.459126] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.534 [2024-11-15 11:21:33.462793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.534 [2024-11-15 11:21:33.462913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.534 [2024-11-15 11:21:33.463022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.534 [2024-11-15 11:21:33.463037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:50.534 { 00:09:50.534 "results": [ 00:09:50.534 { 00:09:50.534 "job": "raid_bdev1", 00:09:50.534 "core_mask": "0x1", 00:09:50.534 "workload": "randrw", 00:09:50.534 "percentage": 50, 
00:09:50.534 "status": "finished", 00:09:50.534 "queue_depth": 1, 00:09:50.534 "io_size": 131072, 00:09:50.534 "runtime": 1.446443, 00:09:50.534 "iops": 10303.89721544506, 00:09:50.534 "mibps": 1287.9871519306325, 00:09:50.534 "io_failed": 1, 00:09:50.534 "io_timeout": 0, 00:09:50.534 "avg_latency_us": 135.88430191211003, 00:09:50.534 "min_latency_us": 30.952727272727273, 00:09:50.534 "max_latency_us": 1876.7127272727273 00:09:50.534 } 00:09:50.534 ], 00:09:50.534 "core_count": 1 00:09:50.534 } 00:09:50.534 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.534 11:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65196 00:09:50.534 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65196 ']' 00:09:50.534 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65196 00:09:50.534 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:50.534 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:50.534 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65196 00:09:50.793 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:50.793 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:50.793 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65196' 00:09:50.793 killing process with pid 65196 00:09:50.793 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65196 00:09:50.793 [2024-11-15 11:21:33.507497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.793 11:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65196 00:09:50.793 [2024-11-15 
11:21:33.709733] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.168 11:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ElfvkYDgwx 00:09:52.168 11:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:52.168 11:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:52.168 11:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:09:52.168 11:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:52.168 ************************************ 00:09:52.168 END TEST raid_read_error_test 00:09:52.168 ************************************ 00:09:52.168 11:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.168 11:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:52.168 11:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:09:52.168 00:09:52.168 real 0m4.754s 00:09:52.168 user 0m5.785s 00:09:52.168 sys 0m0.673s 00:09:52.168 11:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:52.168 11:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.168 11:21:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:52.168 11:21:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:52.169 11:21:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:52.169 11:21:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.169 ************************************ 00:09:52.169 START TEST raid_write_error_test 00:09:52.169 ************************************ 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:09:52.169 11:21:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:52.169 11:21:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GGQ7OcODRz 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65342 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65342 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65342 ']' 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:52.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:52.169 11:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.169 [2024-11-15 11:21:35.038403] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:09:52.169 [2024-11-15 11:21:35.038591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65342 ] 00:09:52.427 [2024-11-15 11:21:35.225664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.427 [2024-11-15 11:21:35.373019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.685 [2024-11-15 11:21:35.598225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.685 [2024-11-15 11:21:35.598309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.252 BaseBdev1_malloc 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.252 true 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.252 [2024-11-15 11:21:36.121038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:53.252 [2024-11-15 11:21:36.121125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.252 [2024-11-15 11:21:36.121163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:53.252 [2024-11-15 11:21:36.121209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.252 [2024-11-15 11:21:36.124481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.252 [2024-11-15 11:21:36.124534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:53.252 BaseBdev1 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:53.252 BaseBdev2_malloc 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.252 true 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.252 [2024-11-15 11:21:36.188511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:53.252 [2024-11-15 11:21:36.188586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.252 [2024-11-15 11:21:36.188613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:53.252 [2024-11-15 11:21:36.188632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.252 [2024-11-15 11:21:36.191562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.252 [2024-11-15 11:21:36.191613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:53.252 BaseBdev2 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.252 11:21:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.252 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.510 BaseBdev3_malloc 00:09:53.510 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.510 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:53.510 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.510 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.510 true 00:09:53.510 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.510 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:53.510 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.510 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.510 [2024-11-15 11:21:36.265573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:53.510 [2024-11-15 11:21:36.265646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.510 [2024-11-15 11:21:36.265675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:53.511 [2024-11-15 11:21:36.265693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.511 [2024-11-15 11:21:36.268671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.511 [2024-11-15 11:21:36.268721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:53.511 BaseBdev3 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.511 [2024-11-15 11:21:36.277693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.511 [2024-11-15 11:21:36.280416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.511 [2024-11-15 11:21:36.280531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.511 [2024-11-15 11:21:36.280806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:53.511 [2024-11-15 11:21:36.280828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:53.511 [2024-11-15 11:21:36.281136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:53.511 [2024-11-15 11:21:36.281520] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:53.511 [2024-11-15 11:21:36.281671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:53.511 [2024-11-15 11:21:36.282044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.511 "name": "raid_bdev1", 00:09:53.511 "uuid": "aa8c7fd9-4ee6-412c-9b5b-36d9a57ee34a", 00:09:53.511 "strip_size_kb": 64, 00:09:53.511 "state": "online", 00:09:53.511 "raid_level": "raid0", 00:09:53.511 "superblock": true, 00:09:53.511 "num_base_bdevs": 3, 00:09:53.511 "num_base_bdevs_discovered": 3, 00:09:53.511 "num_base_bdevs_operational": 3, 00:09:53.511 "base_bdevs_list": [ 00:09:53.511 { 00:09:53.511 "name": "BaseBdev1", 
00:09:53.511 "uuid": "ccaa8ec0-11ad-53a4-b384-c457fa8c53ec", 00:09:53.511 "is_configured": true, 00:09:53.511 "data_offset": 2048, 00:09:53.511 "data_size": 63488 00:09:53.511 }, 00:09:53.511 { 00:09:53.511 "name": "BaseBdev2", 00:09:53.511 "uuid": "5ad32e07-bbec-5689-b5dd-20ae139dbc1f", 00:09:53.511 "is_configured": true, 00:09:53.511 "data_offset": 2048, 00:09:53.511 "data_size": 63488 00:09:53.511 }, 00:09:53.511 { 00:09:53.511 "name": "BaseBdev3", 00:09:53.511 "uuid": "dfaffe40-5843-55e8-a80f-81260f24ecd6", 00:09:53.511 "is_configured": true, 00:09:53.511 "data_offset": 2048, 00:09:53.511 "data_size": 63488 00:09:53.511 } 00:09:53.511 ] 00:09:53.511 }' 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.511 11:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.077 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:54.077 11:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:54.077 [2024-11-15 11:21:36.907672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.011 "name": "raid_bdev1", 00:09:55.011 "uuid": "aa8c7fd9-4ee6-412c-9b5b-36d9a57ee34a", 00:09:55.011 "strip_size_kb": 64, 00:09:55.011 "state": "online", 00:09:55.011 
"raid_level": "raid0", 00:09:55.011 "superblock": true, 00:09:55.011 "num_base_bdevs": 3, 00:09:55.011 "num_base_bdevs_discovered": 3, 00:09:55.011 "num_base_bdevs_operational": 3, 00:09:55.011 "base_bdevs_list": [ 00:09:55.011 { 00:09:55.011 "name": "BaseBdev1", 00:09:55.011 "uuid": "ccaa8ec0-11ad-53a4-b384-c457fa8c53ec", 00:09:55.011 "is_configured": true, 00:09:55.011 "data_offset": 2048, 00:09:55.011 "data_size": 63488 00:09:55.011 }, 00:09:55.011 { 00:09:55.011 "name": "BaseBdev2", 00:09:55.011 "uuid": "5ad32e07-bbec-5689-b5dd-20ae139dbc1f", 00:09:55.011 "is_configured": true, 00:09:55.011 "data_offset": 2048, 00:09:55.011 "data_size": 63488 00:09:55.011 }, 00:09:55.011 { 00:09:55.011 "name": "BaseBdev3", 00:09:55.011 "uuid": "dfaffe40-5843-55e8-a80f-81260f24ecd6", 00:09:55.011 "is_configured": true, 00:09:55.011 "data_offset": 2048, 00:09:55.011 "data_size": 63488 00:09:55.011 } 00:09:55.011 ] 00:09:55.011 }' 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.011 11:21:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 [2024-11-15 11:21:38.303374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.579 [2024-11-15 11:21:38.303541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.579 [2024-11-15 11:21:38.307134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.579 [2024-11-15 11:21:38.307329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.579 [2024-11-15 11:21:38.307405] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.579 [2024-11-15 11:21:38.307422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:55.579 { 00:09:55.579 "results": [ 00:09:55.579 { 00:09:55.579 "job": "raid_bdev1", 00:09:55.579 "core_mask": "0x1", 00:09:55.579 "workload": "randrw", 00:09:55.579 "percentage": 50, 00:09:55.579 "status": "finished", 00:09:55.579 "queue_depth": 1, 00:09:55.579 "io_size": 131072, 00:09:55.579 "runtime": 1.393508, 00:09:55.579 "iops": 10126.242547584943, 00:09:55.579 "mibps": 1265.780318448118, 00:09:55.579 "io_failed": 1, 00:09:55.579 "io_timeout": 0, 00:09:55.579 "avg_latency_us": 137.9651123479695, 00:09:55.579 "min_latency_us": 42.589090909090906, 00:09:55.579 "max_latency_us": 1869.2654545454545 00:09:55.579 } 00:09:55.579 ], 00:09:55.579 "core_count": 1 00:09:55.579 } 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65342 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65342 ']' 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65342 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65342 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:55.579 killing process with pid 65342 00:09:55.579 11:21:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65342' 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65342 00:09:55.579 [2024-11-15 11:21:38.342565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.579 11:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65342 00:09:55.892 [2024-11-15 11:21:38.556374] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.843 11:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GGQ7OcODRz 00:09:56.843 11:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:56.843 11:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:56.843 11:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:56.843 11:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:56.843 11:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.843 11:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:56.843 11:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:56.843 00:09:56.843 real 0m4.834s 00:09:56.843 user 0m5.917s 00:09:56.843 sys 0m0.654s 00:09:56.843 11:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.843 11:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.843 ************************************ 00:09:56.843 END TEST raid_write_error_test 00:09:56.843 ************************************ 00:09:57.102 11:21:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:57.102 11:21:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:57.102 11:21:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:57.102 11:21:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:57.102 11:21:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.102 ************************************ 00:09:57.102 START TEST raid_state_function_test 00:09:57.102 ************************************ 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:57.102 11:21:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65491 00:09:57.102 Process raid pid: 65491 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65491' 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65491 00:09:57.102 11:21:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65491 ']' 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:57.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:57.102 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.102 [2024-11-15 11:21:39.924125] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:09:57.102 [2024-11-15 11:21:39.924334] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.361 [2024-11-15 11:21:40.114120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.618 [2024-11-15 11:21:40.327137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.618 [2024-11-15 11:21:40.557637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.618 [2024-11-15 11:21:40.557690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.183 11:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:58.183 11:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:58.183 11:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.183 11:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.183 11:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.183 [2024-11-15 11:21:40.965344] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.183 [2024-11-15 11:21:40.965416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.183 [2024-11-15 11:21:40.965435] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.183 [2024-11-15 11:21:40.965453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.184 [2024-11-15 11:21:40.965464] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.184 [2024-11-15 11:21:40.965480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.184 11:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.184 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.184 "name": "Existed_Raid", 00:09:58.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.184 "strip_size_kb": 64, 00:09:58.184 "state": "configuring", 00:09:58.184 "raid_level": "concat", 00:09:58.184 "superblock": false, 00:09:58.184 "num_base_bdevs": 3, 00:09:58.184 "num_base_bdevs_discovered": 0, 00:09:58.184 "num_base_bdevs_operational": 3, 00:09:58.184 "base_bdevs_list": [ 00:09:58.184 { 00:09:58.184 "name": "BaseBdev1", 00:09:58.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.184 "is_configured": false, 00:09:58.184 "data_offset": 0, 00:09:58.184 "data_size": 0 00:09:58.184 }, 00:09:58.184 { 00:09:58.184 "name": "BaseBdev2", 00:09:58.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.184 "is_configured": false, 00:09:58.184 "data_offset": 0, 00:09:58.184 "data_size": 0 00:09:58.184 }, 00:09:58.184 { 00:09:58.184 "name": "BaseBdev3", 00:09:58.184 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:58.184 "is_configured": false, 00:09:58.184 "data_offset": 0, 00:09:58.184 "data_size": 0 00:09:58.184 } 00:09:58.184 ] 00:09:58.184 }' 00:09:58.184 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.184 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.751 [2024-11-15 11:21:41.505252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.751 [2024-11-15 11:21:41.505305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.751 [2024-11-15 11:21:41.513236] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.751 [2024-11-15 11:21:41.513301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.751 [2024-11-15 11:21:41.513322] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.751 [2024-11-15 11:21:41.513339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:58.751 [2024-11-15 11:21:41.513349] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.751 [2024-11-15 11:21:41.513364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.751 [2024-11-15 11:21:41.562125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.751 BaseBdev1 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.751 [ 00:09:58.751 { 00:09:58.751 "name": "BaseBdev1", 00:09:58.751 "aliases": [ 00:09:58.751 "fcf452e2-d6a1-45fa-b660-3dc5c0b34dad" 00:09:58.751 ], 00:09:58.751 "product_name": "Malloc disk", 00:09:58.751 "block_size": 512, 00:09:58.751 "num_blocks": 65536, 00:09:58.751 "uuid": "fcf452e2-d6a1-45fa-b660-3dc5c0b34dad", 00:09:58.751 "assigned_rate_limits": { 00:09:58.751 "rw_ios_per_sec": 0, 00:09:58.751 "rw_mbytes_per_sec": 0, 00:09:58.751 "r_mbytes_per_sec": 0, 00:09:58.751 "w_mbytes_per_sec": 0 00:09:58.751 }, 00:09:58.751 "claimed": true, 00:09:58.751 "claim_type": "exclusive_write", 00:09:58.751 "zoned": false, 00:09:58.751 "supported_io_types": { 00:09:58.751 "read": true, 00:09:58.751 "write": true, 00:09:58.751 "unmap": true, 00:09:58.751 "flush": true, 00:09:58.751 "reset": true, 00:09:58.751 "nvme_admin": false, 00:09:58.751 "nvme_io": false, 00:09:58.751 "nvme_io_md": false, 00:09:58.751 "write_zeroes": true, 00:09:58.751 "zcopy": true, 00:09:58.751 "get_zone_info": false, 00:09:58.751 "zone_management": false, 00:09:58.751 "zone_append": false, 00:09:58.751 "compare": false, 00:09:58.751 "compare_and_write": false, 00:09:58.751 "abort": true, 00:09:58.751 "seek_hole": false, 00:09:58.751 "seek_data": false, 00:09:58.751 "copy": true, 00:09:58.751 "nvme_iov_md": false 00:09:58.751 }, 00:09:58.751 "memory_domains": [ 00:09:58.751 { 00:09:58.751 "dma_device_id": "system", 00:09:58.751 "dma_device_type": 1 00:09:58.751 }, 00:09:58.751 { 00:09:58.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:58.751 "dma_device_type": 2 00:09:58.751 } 00:09:58.751 ], 00:09:58.751 "driver_specific": {} 00:09:58.751 } 00:09:58.751 ] 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.751 11:21:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.751 "name": "Existed_Raid", 00:09:58.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.751 "strip_size_kb": 64, 00:09:58.751 "state": "configuring", 00:09:58.751 "raid_level": "concat", 00:09:58.751 "superblock": false, 00:09:58.751 "num_base_bdevs": 3, 00:09:58.751 "num_base_bdevs_discovered": 1, 00:09:58.751 "num_base_bdevs_operational": 3, 00:09:58.751 "base_bdevs_list": [ 00:09:58.751 { 00:09:58.751 "name": "BaseBdev1", 00:09:58.751 "uuid": "fcf452e2-d6a1-45fa-b660-3dc5c0b34dad", 00:09:58.751 "is_configured": true, 00:09:58.751 "data_offset": 0, 00:09:58.751 "data_size": 65536 00:09:58.751 }, 00:09:58.751 { 00:09:58.751 "name": "BaseBdev2", 00:09:58.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.751 "is_configured": false, 00:09:58.751 "data_offset": 0, 00:09:58.751 "data_size": 0 00:09:58.751 }, 00:09:58.751 { 00:09:58.751 "name": "BaseBdev3", 00:09:58.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.751 "is_configured": false, 00:09:58.751 "data_offset": 0, 00:09:58.751 "data_size": 0 00:09:58.751 } 00:09:58.751 ] 00:09:58.751 }' 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.751 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.317 [2024-11-15 11:21:42.094326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.317 [2024-11-15 11:21:42.094433] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.317 [2024-11-15 11:21:42.102356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.317 [2024-11-15 11:21:42.104967] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.317 [2024-11-15 11:21:42.105027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.317 [2024-11-15 11:21:42.105045] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.317 [2024-11-15 11:21:42.105061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.317 11:21:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.317 "name": "Existed_Raid", 00:09:59.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.317 "strip_size_kb": 64, 00:09:59.317 "state": "configuring", 00:09:59.317 "raid_level": "concat", 00:09:59.317 "superblock": false, 00:09:59.317 "num_base_bdevs": 3, 00:09:59.317 "num_base_bdevs_discovered": 1, 00:09:59.317 "num_base_bdevs_operational": 3, 00:09:59.317 "base_bdevs_list": [ 00:09:59.317 { 00:09:59.317 "name": "BaseBdev1", 00:09:59.317 "uuid": "fcf452e2-d6a1-45fa-b660-3dc5c0b34dad", 00:09:59.317 "is_configured": true, 00:09:59.317 "data_offset": 
0, 00:09:59.317 "data_size": 65536 00:09:59.317 }, 00:09:59.317 { 00:09:59.317 "name": "BaseBdev2", 00:09:59.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.317 "is_configured": false, 00:09:59.317 "data_offset": 0, 00:09:59.317 "data_size": 0 00:09:59.317 }, 00:09:59.317 { 00:09:59.317 "name": "BaseBdev3", 00:09:59.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.317 "is_configured": false, 00:09:59.317 "data_offset": 0, 00:09:59.317 "data_size": 0 00:09:59.317 } 00:09:59.317 ] 00:09:59.317 }' 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.317 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.886 [2024-11-15 11:21:42.644525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.886 BaseBdev2 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.886 [ 00:09:59.886 { 00:09:59.886 "name": "BaseBdev2", 00:09:59.886 "aliases": [ 00:09:59.886 "491fd786-b169-4023-85c7-931cd0a75ffd" 00:09:59.886 ], 00:09:59.886 "product_name": "Malloc disk", 00:09:59.886 "block_size": 512, 00:09:59.886 "num_blocks": 65536, 00:09:59.886 "uuid": "491fd786-b169-4023-85c7-931cd0a75ffd", 00:09:59.886 "assigned_rate_limits": { 00:09:59.886 "rw_ios_per_sec": 0, 00:09:59.886 "rw_mbytes_per_sec": 0, 00:09:59.886 "r_mbytes_per_sec": 0, 00:09:59.886 "w_mbytes_per_sec": 0 00:09:59.886 }, 00:09:59.886 "claimed": true, 00:09:59.886 "claim_type": "exclusive_write", 00:09:59.886 "zoned": false, 00:09:59.886 "supported_io_types": { 00:09:59.886 "read": true, 00:09:59.886 "write": true, 00:09:59.886 "unmap": true, 00:09:59.886 "flush": true, 00:09:59.886 "reset": true, 00:09:59.886 "nvme_admin": false, 00:09:59.886 "nvme_io": false, 00:09:59.886 "nvme_io_md": false, 00:09:59.886 "write_zeroes": true, 00:09:59.886 "zcopy": true, 00:09:59.886 "get_zone_info": false, 00:09:59.886 "zone_management": false, 00:09:59.886 "zone_append": false, 00:09:59.886 "compare": false, 00:09:59.886 "compare_and_write": false, 00:09:59.886 "abort": true, 00:09:59.886 "seek_hole": 
false, 00:09:59.886 "seek_data": false, 00:09:59.886 "copy": true, 00:09:59.886 "nvme_iov_md": false 00:09:59.886 }, 00:09:59.886 "memory_domains": [ 00:09:59.886 { 00:09:59.886 "dma_device_id": "system", 00:09:59.886 "dma_device_type": 1 00:09:59.886 }, 00:09:59.886 { 00:09:59.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.886 "dma_device_type": 2 00:09:59.886 } 00:09:59.886 ], 00:09:59.886 "driver_specific": {} 00:09:59.886 } 00:09:59.886 ] 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.886 "name": "Existed_Raid", 00:09:59.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.886 "strip_size_kb": 64, 00:09:59.886 "state": "configuring", 00:09:59.886 "raid_level": "concat", 00:09:59.886 "superblock": false, 00:09:59.886 "num_base_bdevs": 3, 00:09:59.886 "num_base_bdevs_discovered": 2, 00:09:59.886 "num_base_bdevs_operational": 3, 00:09:59.886 "base_bdevs_list": [ 00:09:59.886 { 00:09:59.886 "name": "BaseBdev1", 00:09:59.886 "uuid": "fcf452e2-d6a1-45fa-b660-3dc5c0b34dad", 00:09:59.886 "is_configured": true, 00:09:59.886 "data_offset": 0, 00:09:59.886 "data_size": 65536 00:09:59.886 }, 00:09:59.886 { 00:09:59.886 "name": "BaseBdev2", 00:09:59.886 "uuid": "491fd786-b169-4023-85c7-931cd0a75ffd", 00:09:59.886 "is_configured": true, 00:09:59.886 "data_offset": 0, 00:09:59.886 "data_size": 65536 00:09:59.886 }, 00:09:59.886 { 00:09:59.886 "name": "BaseBdev3", 00:09:59.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.886 "is_configured": false, 00:09:59.886 "data_offset": 0, 00:09:59.886 "data_size": 0 00:09:59.886 } 00:09:59.886 ] 00:09:59.886 }' 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.886 11:21:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.451 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.451 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.451 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.451 [2024-11-15 11:21:43.223528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.451 [2024-11-15 11:21:43.223594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:00.451 [2024-11-15 11:21:43.223616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:00.451 [2024-11-15 11:21:43.223980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:00.451 [2024-11-15 11:21:43.224275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.451 [2024-11-15 11:21:43.224304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:00.451 [2024-11-15 11:21:43.224639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.451 BaseBdev3 00:10:00.451 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.451 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:00.451 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:00.451 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:00.451 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:00.451 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:00.451 11:21:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.452 [ 00:10:00.452 { 00:10:00.452 "name": "BaseBdev3", 00:10:00.452 "aliases": [ 00:10:00.452 "9b6a0f98-b5f0-4e81-be35-f924c53f07e3" 00:10:00.452 ], 00:10:00.452 "product_name": "Malloc disk", 00:10:00.452 "block_size": 512, 00:10:00.452 "num_blocks": 65536, 00:10:00.452 "uuid": "9b6a0f98-b5f0-4e81-be35-f924c53f07e3", 00:10:00.452 "assigned_rate_limits": { 00:10:00.452 "rw_ios_per_sec": 0, 00:10:00.452 "rw_mbytes_per_sec": 0, 00:10:00.452 "r_mbytes_per_sec": 0, 00:10:00.452 "w_mbytes_per_sec": 0 00:10:00.452 }, 00:10:00.452 "claimed": true, 00:10:00.452 "claim_type": "exclusive_write", 00:10:00.452 "zoned": false, 00:10:00.452 "supported_io_types": { 00:10:00.452 "read": true, 00:10:00.452 "write": true, 00:10:00.452 "unmap": true, 00:10:00.452 "flush": true, 00:10:00.452 "reset": true, 00:10:00.452 "nvme_admin": false, 00:10:00.452 "nvme_io": false, 00:10:00.452 "nvme_io_md": false, 00:10:00.452 "write_zeroes": true, 00:10:00.452 "zcopy": true, 00:10:00.452 "get_zone_info": false, 00:10:00.452 "zone_management": false, 00:10:00.452 "zone_append": false, 00:10:00.452 "compare": false, 
00:10:00.452 "compare_and_write": false, 00:10:00.452 "abort": true, 00:10:00.452 "seek_hole": false, 00:10:00.452 "seek_data": false, 00:10:00.452 "copy": true, 00:10:00.452 "nvme_iov_md": false 00:10:00.452 }, 00:10:00.452 "memory_domains": [ 00:10:00.452 { 00:10:00.452 "dma_device_id": "system", 00:10:00.452 "dma_device_type": 1 00:10:00.452 }, 00:10:00.452 { 00:10:00.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.452 "dma_device_type": 2 00:10:00.452 } 00:10:00.452 ], 00:10:00.452 "driver_specific": {} 00:10:00.452 } 00:10:00.452 ] 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.452 "name": "Existed_Raid", 00:10:00.452 "uuid": "24d633ef-8f76-4672-add5-12e7204a70e0", 00:10:00.452 "strip_size_kb": 64, 00:10:00.452 "state": "online", 00:10:00.452 "raid_level": "concat", 00:10:00.452 "superblock": false, 00:10:00.452 "num_base_bdevs": 3, 00:10:00.452 "num_base_bdevs_discovered": 3, 00:10:00.452 "num_base_bdevs_operational": 3, 00:10:00.452 "base_bdevs_list": [ 00:10:00.452 { 00:10:00.452 "name": "BaseBdev1", 00:10:00.452 "uuid": "fcf452e2-d6a1-45fa-b660-3dc5c0b34dad", 00:10:00.452 "is_configured": true, 00:10:00.452 "data_offset": 0, 00:10:00.452 "data_size": 65536 00:10:00.452 }, 00:10:00.452 { 00:10:00.452 "name": "BaseBdev2", 00:10:00.452 "uuid": "491fd786-b169-4023-85c7-931cd0a75ffd", 00:10:00.452 "is_configured": true, 00:10:00.452 "data_offset": 0, 00:10:00.452 "data_size": 65536 00:10:00.452 }, 00:10:00.452 { 00:10:00.452 "name": "BaseBdev3", 00:10:00.452 "uuid": "9b6a0f98-b5f0-4e81-be35-f924c53f07e3", 00:10:00.452 "is_configured": true, 00:10:00.452 "data_offset": 0, 00:10:00.452 "data_size": 65536 00:10:00.452 } 00:10:00.452 ] 00:10:00.452 }' 00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:00.452 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.019 [2024-11-15 11:21:43.748147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.019 "name": "Existed_Raid", 00:10:01.019 "aliases": [ 00:10:01.019 "24d633ef-8f76-4672-add5-12e7204a70e0" 00:10:01.019 ], 00:10:01.019 "product_name": "Raid Volume", 00:10:01.019 "block_size": 512, 00:10:01.019 "num_blocks": 196608, 00:10:01.019 "uuid": "24d633ef-8f76-4672-add5-12e7204a70e0", 00:10:01.019 "assigned_rate_limits": { 00:10:01.019 "rw_ios_per_sec": 0, 00:10:01.019 "rw_mbytes_per_sec": 0, 00:10:01.019 "r_mbytes_per_sec": 
0, 00:10:01.019 "w_mbytes_per_sec": 0 00:10:01.019 }, 00:10:01.019 "claimed": false, 00:10:01.019 "zoned": false, 00:10:01.019 "supported_io_types": { 00:10:01.019 "read": true, 00:10:01.019 "write": true, 00:10:01.019 "unmap": true, 00:10:01.019 "flush": true, 00:10:01.019 "reset": true, 00:10:01.019 "nvme_admin": false, 00:10:01.019 "nvme_io": false, 00:10:01.019 "nvme_io_md": false, 00:10:01.019 "write_zeroes": true, 00:10:01.019 "zcopy": false, 00:10:01.019 "get_zone_info": false, 00:10:01.019 "zone_management": false, 00:10:01.019 "zone_append": false, 00:10:01.019 "compare": false, 00:10:01.019 "compare_and_write": false, 00:10:01.019 "abort": false, 00:10:01.019 "seek_hole": false, 00:10:01.019 "seek_data": false, 00:10:01.019 "copy": false, 00:10:01.019 "nvme_iov_md": false 00:10:01.019 }, 00:10:01.019 "memory_domains": [ 00:10:01.019 { 00:10:01.019 "dma_device_id": "system", 00:10:01.019 "dma_device_type": 1 00:10:01.019 }, 00:10:01.019 { 00:10:01.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.019 "dma_device_type": 2 00:10:01.019 }, 00:10:01.019 { 00:10:01.019 "dma_device_id": "system", 00:10:01.019 "dma_device_type": 1 00:10:01.019 }, 00:10:01.019 { 00:10:01.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.019 "dma_device_type": 2 00:10:01.019 }, 00:10:01.019 { 00:10:01.019 "dma_device_id": "system", 00:10:01.019 "dma_device_type": 1 00:10:01.019 }, 00:10:01.019 { 00:10:01.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.019 "dma_device_type": 2 00:10:01.019 } 00:10:01.019 ], 00:10:01.019 "driver_specific": { 00:10:01.019 "raid": { 00:10:01.019 "uuid": "24d633ef-8f76-4672-add5-12e7204a70e0", 00:10:01.019 "strip_size_kb": 64, 00:10:01.019 "state": "online", 00:10:01.019 "raid_level": "concat", 00:10:01.019 "superblock": false, 00:10:01.019 "num_base_bdevs": 3, 00:10:01.019 "num_base_bdevs_discovered": 3, 00:10:01.019 "num_base_bdevs_operational": 3, 00:10:01.019 "base_bdevs_list": [ 00:10:01.019 { 00:10:01.019 "name": "BaseBdev1", 
00:10:01.019 "uuid": "fcf452e2-d6a1-45fa-b660-3dc5c0b34dad", 00:10:01.019 "is_configured": true, 00:10:01.019 "data_offset": 0, 00:10:01.019 "data_size": 65536 00:10:01.019 }, 00:10:01.019 { 00:10:01.019 "name": "BaseBdev2", 00:10:01.019 "uuid": "491fd786-b169-4023-85c7-931cd0a75ffd", 00:10:01.019 "is_configured": true, 00:10:01.019 "data_offset": 0, 00:10:01.019 "data_size": 65536 00:10:01.019 }, 00:10:01.019 { 00:10:01.019 "name": "BaseBdev3", 00:10:01.019 "uuid": "9b6a0f98-b5f0-4e81-be35-f924c53f07e3", 00:10:01.019 "is_configured": true, 00:10:01.019 "data_offset": 0, 00:10:01.019 "data_size": 65536 00:10:01.019 } 00:10:01.019 ] 00:10:01.019 } 00:10:01.019 } 00:10:01.019 }' 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.019 BaseBdev2 00:10:01.019 BaseBdev3' 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.019 11:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.278 11:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.278 [2024-11-15 11:21:44.067949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.278 [2024-11-15 11:21:44.067987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.278 [2024-11-15 11:21:44.068074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.278 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.278 "name": "Existed_Raid", 00:10:01.278 "uuid": "24d633ef-8f76-4672-add5-12e7204a70e0", 00:10:01.278 "strip_size_kb": 64, 00:10:01.278 "state": "offline", 00:10:01.279 "raid_level": "concat", 00:10:01.279 "superblock": false, 00:10:01.279 "num_base_bdevs": 3, 00:10:01.279 "num_base_bdevs_discovered": 2, 00:10:01.279 "num_base_bdevs_operational": 2, 00:10:01.279 "base_bdevs_list": [ 00:10:01.279 { 00:10:01.279 "name": null, 00:10:01.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.279 "is_configured": false, 00:10:01.279 "data_offset": 0, 00:10:01.279 "data_size": 65536 00:10:01.279 }, 00:10:01.279 { 00:10:01.279 "name": "BaseBdev2", 00:10:01.279 "uuid": 
"491fd786-b169-4023-85c7-931cd0a75ffd", 00:10:01.279 "is_configured": true, 00:10:01.279 "data_offset": 0, 00:10:01.279 "data_size": 65536 00:10:01.279 }, 00:10:01.279 { 00:10:01.279 "name": "BaseBdev3", 00:10:01.279 "uuid": "9b6a0f98-b5f0-4e81-be35-f924c53f07e3", 00:10:01.279 "is_configured": true, 00:10:01.279 "data_offset": 0, 00:10:01.279 "data_size": 65536 00:10:01.279 } 00:10:01.279 ] 00:10:01.279 }' 00:10:01.279 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.279 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.847 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:01.847 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.847 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.847 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.847 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.847 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.847 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.847 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.847 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.847 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:01.847 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.847 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.847 [2024-11-15 11:21:44.711828] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.105 [2024-11-15 11:21:44.857597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.105 [2024-11-15 11:21:44.857671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.105 11:21:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.105 11:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.105 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:02.105 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:02.106 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:02.106 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:02.106 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.106 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.106 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.106 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.364 BaseBdev2 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:02.364 
11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.364 [ 00:10:02.364 { 00:10:02.364 "name": "BaseBdev2", 00:10:02.364 "aliases": [ 00:10:02.364 "53647ead-d71e-4cbc-b43d-b31c32aa0cf6" 00:10:02.364 ], 00:10:02.364 "product_name": "Malloc disk", 00:10:02.364 "block_size": 512, 00:10:02.364 "num_blocks": 65536, 00:10:02.364 "uuid": "53647ead-d71e-4cbc-b43d-b31c32aa0cf6", 00:10:02.364 "assigned_rate_limits": { 00:10:02.364 "rw_ios_per_sec": 0, 00:10:02.364 "rw_mbytes_per_sec": 0, 00:10:02.364 "r_mbytes_per_sec": 0, 00:10:02.364 "w_mbytes_per_sec": 0 00:10:02.364 }, 00:10:02.364 "claimed": false, 00:10:02.364 "zoned": false, 00:10:02.364 "supported_io_types": { 00:10:02.364 "read": true, 00:10:02.364 "write": true, 00:10:02.364 "unmap": true, 00:10:02.364 "flush": true, 00:10:02.364 "reset": true, 00:10:02.364 "nvme_admin": false, 00:10:02.364 "nvme_io": false, 00:10:02.364 "nvme_io_md": false, 00:10:02.364 "write_zeroes": true, 
00:10:02.364 "zcopy": true, 00:10:02.364 "get_zone_info": false, 00:10:02.364 "zone_management": false, 00:10:02.364 "zone_append": false, 00:10:02.364 "compare": false, 00:10:02.364 "compare_and_write": false, 00:10:02.364 "abort": true, 00:10:02.364 "seek_hole": false, 00:10:02.364 "seek_data": false, 00:10:02.364 "copy": true, 00:10:02.364 "nvme_iov_md": false 00:10:02.364 }, 00:10:02.364 "memory_domains": [ 00:10:02.364 { 00:10:02.364 "dma_device_id": "system", 00:10:02.364 "dma_device_type": 1 00:10:02.364 }, 00:10:02.364 { 00:10:02.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.364 "dma_device_type": 2 00:10:02.364 } 00:10:02.364 ], 00:10:02.364 "driver_specific": {} 00:10:02.364 } 00:10:02.364 ] 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.364 BaseBdev3 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:02.364 11:21:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.364 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.364 [ 00:10:02.364 { 00:10:02.364 "name": "BaseBdev3", 00:10:02.364 "aliases": [ 00:10:02.364 "a225ec97-ddef-4af5-839e-5fc332397a98" 00:10:02.365 ], 00:10:02.365 "product_name": "Malloc disk", 00:10:02.365 "block_size": 512, 00:10:02.365 "num_blocks": 65536, 00:10:02.365 "uuid": "a225ec97-ddef-4af5-839e-5fc332397a98", 00:10:02.365 "assigned_rate_limits": { 00:10:02.365 "rw_ios_per_sec": 0, 00:10:02.365 "rw_mbytes_per_sec": 0, 00:10:02.365 "r_mbytes_per_sec": 0, 00:10:02.365 "w_mbytes_per_sec": 0 00:10:02.365 }, 00:10:02.365 "claimed": false, 00:10:02.365 "zoned": false, 00:10:02.365 "supported_io_types": { 00:10:02.365 "read": true, 00:10:02.365 "write": true, 00:10:02.365 "unmap": true, 00:10:02.365 "flush": true, 00:10:02.365 "reset": true, 00:10:02.365 "nvme_admin": false, 00:10:02.365 "nvme_io": false, 00:10:02.365 "nvme_io_md": false, 00:10:02.365 "write_zeroes": true, 
00:10:02.365 "zcopy": true, 00:10:02.365 "get_zone_info": false, 00:10:02.365 "zone_management": false, 00:10:02.365 "zone_append": false, 00:10:02.365 "compare": false, 00:10:02.365 "compare_and_write": false, 00:10:02.365 "abort": true, 00:10:02.365 "seek_hole": false, 00:10:02.365 "seek_data": false, 00:10:02.365 "copy": true, 00:10:02.365 "nvme_iov_md": false 00:10:02.365 }, 00:10:02.365 "memory_domains": [ 00:10:02.365 { 00:10:02.365 "dma_device_id": "system", 00:10:02.365 "dma_device_type": 1 00:10:02.365 }, 00:10:02.365 { 00:10:02.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.365 "dma_device_type": 2 00:10:02.365 } 00:10:02.365 ], 00:10:02.365 "driver_specific": {} 00:10:02.365 } 00:10:02.365 ] 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.365 [2024-11-15 11:21:45.166837] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.365 [2024-11-15 11:21:45.166909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.365 [2024-11-15 11:21:45.166941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.365 [2024-11-15 11:21:45.169498] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.365 "name": "Existed_Raid", 00:10:02.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.365 "strip_size_kb": 64, 00:10:02.365 "state": "configuring", 00:10:02.365 "raid_level": "concat", 00:10:02.365 "superblock": false, 00:10:02.365 "num_base_bdevs": 3, 00:10:02.365 "num_base_bdevs_discovered": 2, 00:10:02.365 "num_base_bdevs_operational": 3, 00:10:02.365 "base_bdevs_list": [ 00:10:02.365 { 00:10:02.365 "name": "BaseBdev1", 00:10:02.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.365 "is_configured": false, 00:10:02.365 "data_offset": 0, 00:10:02.365 "data_size": 0 00:10:02.365 }, 00:10:02.365 { 00:10:02.365 "name": "BaseBdev2", 00:10:02.365 "uuid": "53647ead-d71e-4cbc-b43d-b31c32aa0cf6", 00:10:02.365 "is_configured": true, 00:10:02.365 "data_offset": 0, 00:10:02.365 "data_size": 65536 00:10:02.365 }, 00:10:02.365 { 00:10:02.365 "name": "BaseBdev3", 00:10:02.365 "uuid": "a225ec97-ddef-4af5-839e-5fc332397a98", 00:10:02.365 "is_configured": true, 00:10:02.365 "data_offset": 0, 00:10:02.365 "data_size": 65536 00:10:02.365 } 00:10:02.365 ] 00:10:02.365 }' 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.365 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.940 [2024-11-15 11:21:45.691055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.940 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.940 "name": "Existed_Raid", 00:10:02.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.940 "strip_size_kb": 64, 00:10:02.940 "state": "configuring", 00:10:02.940 "raid_level": "concat", 00:10:02.940 "superblock": false, 
00:10:02.940 "num_base_bdevs": 3, 00:10:02.940 "num_base_bdevs_discovered": 1, 00:10:02.940 "num_base_bdevs_operational": 3, 00:10:02.940 "base_bdevs_list": [ 00:10:02.940 { 00:10:02.940 "name": "BaseBdev1", 00:10:02.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.940 "is_configured": false, 00:10:02.940 "data_offset": 0, 00:10:02.940 "data_size": 0 00:10:02.941 }, 00:10:02.941 { 00:10:02.941 "name": null, 00:10:02.941 "uuid": "53647ead-d71e-4cbc-b43d-b31c32aa0cf6", 00:10:02.941 "is_configured": false, 00:10:02.941 "data_offset": 0, 00:10:02.941 "data_size": 65536 00:10:02.941 }, 00:10:02.941 { 00:10:02.941 "name": "BaseBdev3", 00:10:02.941 "uuid": "a225ec97-ddef-4af5-839e-5fc332397a98", 00:10:02.941 "is_configured": true, 00:10:02.941 "data_offset": 0, 00:10:02.941 "data_size": 65536 00:10:02.941 } 00:10:02.941 ] 00:10:02.941 }' 00:10:02.941 11:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.941 11:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.509 
11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.509 [2024-11-15 11:21:46.311775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.509 BaseBdev1 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.509 [ 00:10:03.509 { 00:10:03.509 "name": "BaseBdev1", 00:10:03.509 "aliases": [ 00:10:03.509 "9538eb0c-b1ea-4bfc-b2e9-f32dd9d97a28" 00:10:03.509 ], 00:10:03.509 "product_name": 
"Malloc disk", 00:10:03.509 "block_size": 512, 00:10:03.509 "num_blocks": 65536, 00:10:03.509 "uuid": "9538eb0c-b1ea-4bfc-b2e9-f32dd9d97a28", 00:10:03.509 "assigned_rate_limits": { 00:10:03.509 "rw_ios_per_sec": 0, 00:10:03.509 "rw_mbytes_per_sec": 0, 00:10:03.509 "r_mbytes_per_sec": 0, 00:10:03.509 "w_mbytes_per_sec": 0 00:10:03.509 }, 00:10:03.509 "claimed": true, 00:10:03.509 "claim_type": "exclusive_write", 00:10:03.509 "zoned": false, 00:10:03.509 "supported_io_types": { 00:10:03.509 "read": true, 00:10:03.509 "write": true, 00:10:03.509 "unmap": true, 00:10:03.509 "flush": true, 00:10:03.509 "reset": true, 00:10:03.509 "nvme_admin": false, 00:10:03.509 "nvme_io": false, 00:10:03.509 "nvme_io_md": false, 00:10:03.509 "write_zeroes": true, 00:10:03.509 "zcopy": true, 00:10:03.509 "get_zone_info": false, 00:10:03.509 "zone_management": false, 00:10:03.509 "zone_append": false, 00:10:03.509 "compare": false, 00:10:03.509 "compare_and_write": false, 00:10:03.509 "abort": true, 00:10:03.509 "seek_hole": false, 00:10:03.509 "seek_data": false, 00:10:03.509 "copy": true, 00:10:03.509 "nvme_iov_md": false 00:10:03.509 }, 00:10:03.509 "memory_domains": [ 00:10:03.509 { 00:10:03.509 "dma_device_id": "system", 00:10:03.509 "dma_device_type": 1 00:10:03.509 }, 00:10:03.509 { 00:10:03.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.509 "dma_device_type": 2 00:10:03.509 } 00:10:03.509 ], 00:10:03.509 "driver_specific": {} 00:10:03.509 } 00:10:03.509 ] 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:03.509 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.510 11:21:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.510 "name": "Existed_Raid", 00:10:03.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.510 "strip_size_kb": 64, 00:10:03.510 "state": "configuring", 00:10:03.510 "raid_level": "concat", 00:10:03.510 "superblock": false, 00:10:03.510 "num_base_bdevs": 3, 00:10:03.510 "num_base_bdevs_discovered": 2, 00:10:03.510 "num_base_bdevs_operational": 3, 00:10:03.510 "base_bdevs_list": [ 00:10:03.510 { 00:10:03.510 "name": "BaseBdev1", 
00:10:03.510 "uuid": "9538eb0c-b1ea-4bfc-b2e9-f32dd9d97a28", 00:10:03.510 "is_configured": true, 00:10:03.510 "data_offset": 0, 00:10:03.510 "data_size": 65536 00:10:03.510 }, 00:10:03.510 { 00:10:03.510 "name": null, 00:10:03.510 "uuid": "53647ead-d71e-4cbc-b43d-b31c32aa0cf6", 00:10:03.510 "is_configured": false, 00:10:03.510 "data_offset": 0, 00:10:03.510 "data_size": 65536 00:10:03.510 }, 00:10:03.510 { 00:10:03.510 "name": "BaseBdev3", 00:10:03.510 "uuid": "a225ec97-ddef-4af5-839e-5fc332397a98", 00:10:03.510 "is_configured": true, 00:10:03.510 "data_offset": 0, 00:10:03.510 "data_size": 65536 00:10:03.510 } 00:10:03.510 ] 00:10:03.510 }' 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.510 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.077 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.077 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.078 [2024-11-15 11:21:46.908038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.078 
11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.078 "name": "Existed_Raid", 00:10:04.078 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:04.078 "strip_size_kb": 64, 00:10:04.078 "state": "configuring", 00:10:04.078 "raid_level": "concat", 00:10:04.078 "superblock": false, 00:10:04.078 "num_base_bdevs": 3, 00:10:04.078 "num_base_bdevs_discovered": 1, 00:10:04.078 "num_base_bdevs_operational": 3, 00:10:04.078 "base_bdevs_list": [ 00:10:04.078 { 00:10:04.078 "name": "BaseBdev1", 00:10:04.078 "uuid": "9538eb0c-b1ea-4bfc-b2e9-f32dd9d97a28", 00:10:04.078 "is_configured": true, 00:10:04.078 "data_offset": 0, 00:10:04.078 "data_size": 65536 00:10:04.078 }, 00:10:04.078 { 00:10:04.078 "name": null, 00:10:04.078 "uuid": "53647ead-d71e-4cbc-b43d-b31c32aa0cf6", 00:10:04.078 "is_configured": false, 00:10:04.078 "data_offset": 0, 00:10:04.078 "data_size": 65536 00:10:04.078 }, 00:10:04.078 { 00:10:04.078 "name": null, 00:10:04.078 "uuid": "a225ec97-ddef-4af5-839e-5fc332397a98", 00:10:04.078 "is_configured": false, 00:10:04.078 "data_offset": 0, 00:10:04.078 "data_size": 65536 00:10:04.078 } 00:10:04.078 ] 00:10:04.078 }' 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.078 11:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.647 [2024-11-15 11:21:47.508223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.647 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.647 "name": "Existed_Raid", 00:10:04.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.647 "strip_size_kb": 64, 00:10:04.647 "state": "configuring", 00:10:04.647 "raid_level": "concat", 00:10:04.647 "superblock": false, 00:10:04.647 "num_base_bdevs": 3, 00:10:04.647 "num_base_bdevs_discovered": 2, 00:10:04.647 "num_base_bdevs_operational": 3, 00:10:04.647 "base_bdevs_list": [ 00:10:04.647 { 00:10:04.647 "name": "BaseBdev1", 00:10:04.647 "uuid": "9538eb0c-b1ea-4bfc-b2e9-f32dd9d97a28", 00:10:04.648 "is_configured": true, 00:10:04.648 "data_offset": 0, 00:10:04.648 "data_size": 65536 00:10:04.648 }, 00:10:04.648 { 00:10:04.648 "name": null, 00:10:04.648 "uuid": "53647ead-d71e-4cbc-b43d-b31c32aa0cf6", 00:10:04.648 "is_configured": false, 00:10:04.648 "data_offset": 0, 00:10:04.648 "data_size": 65536 00:10:04.648 }, 00:10:04.648 { 00:10:04.648 "name": "BaseBdev3", 00:10:04.648 "uuid": "a225ec97-ddef-4af5-839e-5fc332397a98", 00:10:04.648 "is_configured": true, 00:10:04.648 "data_offset": 0, 00:10:04.648 "data_size": 65536 00:10:04.648 } 00:10:04.648 ] 00:10:04.648 }' 00:10:04.648 11:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.648 11:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.217 [2024-11-15 11:21:48.060380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.217 
11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.217 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.475 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.475 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.475 "name": "Existed_Raid", 00:10:05.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.475 "strip_size_kb": 64, 00:10:05.475 "state": "configuring", 00:10:05.475 "raid_level": "concat", 00:10:05.475 "superblock": false, 00:10:05.475 "num_base_bdevs": 3, 00:10:05.475 "num_base_bdevs_discovered": 1, 00:10:05.475 "num_base_bdevs_operational": 3, 00:10:05.475 "base_bdevs_list": [ 00:10:05.475 { 00:10:05.475 "name": null, 00:10:05.475 "uuid": "9538eb0c-b1ea-4bfc-b2e9-f32dd9d97a28", 00:10:05.475 "is_configured": false, 00:10:05.475 "data_offset": 0, 00:10:05.475 "data_size": 65536 00:10:05.475 }, 00:10:05.475 { 00:10:05.475 "name": null, 00:10:05.475 "uuid": "53647ead-d71e-4cbc-b43d-b31c32aa0cf6", 00:10:05.475 "is_configured": false, 00:10:05.475 "data_offset": 0, 00:10:05.475 "data_size": 65536 00:10:05.475 }, 00:10:05.475 { 00:10:05.475 "name": "BaseBdev3", 00:10:05.475 "uuid": "a225ec97-ddef-4af5-839e-5fc332397a98", 00:10:05.475 "is_configured": true, 00:10:05.475 "data_offset": 0, 00:10:05.475 "data_size": 65536 00:10:05.475 } 00:10:05.475 ] 00:10:05.475 }' 00:10:05.475 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.475 11:21:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.733 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.733 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.733 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.733 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.992 [2024-11-15 11:21:48.722791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.992 11:21:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.992 "name": "Existed_Raid", 00:10:05.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.992 "strip_size_kb": 64, 00:10:05.992 "state": "configuring", 00:10:05.992 "raid_level": "concat", 00:10:05.992 "superblock": false, 00:10:05.992 "num_base_bdevs": 3, 00:10:05.992 "num_base_bdevs_discovered": 2, 00:10:05.992 "num_base_bdevs_operational": 3, 00:10:05.992 "base_bdevs_list": [ 00:10:05.992 { 00:10:05.992 "name": null, 00:10:05.992 "uuid": "9538eb0c-b1ea-4bfc-b2e9-f32dd9d97a28", 00:10:05.992 "is_configured": false, 00:10:05.992 "data_offset": 0, 00:10:05.992 "data_size": 65536 00:10:05.992 }, 00:10:05.992 { 00:10:05.992 "name": "BaseBdev2", 00:10:05.992 "uuid": "53647ead-d71e-4cbc-b43d-b31c32aa0cf6", 00:10:05.992 "is_configured": true, 00:10:05.992 "data_offset": 
0, 00:10:05.992 "data_size": 65536 00:10:05.992 }, 00:10:05.992 { 00:10:05.992 "name": "BaseBdev3", 00:10:05.992 "uuid": "a225ec97-ddef-4af5-839e-5fc332397a98", 00:10:05.992 "is_configured": true, 00:10:05.992 "data_offset": 0, 00:10:05.992 "data_size": 65536 00:10:05.992 } 00:10:05.992 ] 00:10:05.992 }' 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.992 11:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9538eb0c-b1ea-4bfc-b2e9-f32dd9d97a28 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.568 [2024-11-15 11:21:49.388411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:06.568 [2024-11-15 11:21:49.388469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:06.568 [2024-11-15 11:21:49.388485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:06.568 [2024-11-15 11:21:49.388828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:06.568 [2024-11-15 11:21:49.389033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:06.568 [2024-11-15 11:21:49.389050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:06.568 NewBaseBdev 00:10:06.568 [2024-11-15 11:21:49.389428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:06.568 
11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.568 [ 00:10:06.568 { 00:10:06.568 "name": "NewBaseBdev", 00:10:06.568 "aliases": [ 00:10:06.568 "9538eb0c-b1ea-4bfc-b2e9-f32dd9d97a28" 00:10:06.568 ], 00:10:06.568 "product_name": "Malloc disk", 00:10:06.568 "block_size": 512, 00:10:06.568 "num_blocks": 65536, 00:10:06.568 "uuid": "9538eb0c-b1ea-4bfc-b2e9-f32dd9d97a28", 00:10:06.568 "assigned_rate_limits": { 00:10:06.568 "rw_ios_per_sec": 0, 00:10:06.568 "rw_mbytes_per_sec": 0, 00:10:06.568 "r_mbytes_per_sec": 0, 00:10:06.568 "w_mbytes_per_sec": 0 00:10:06.568 }, 00:10:06.568 "claimed": true, 00:10:06.568 "claim_type": "exclusive_write", 00:10:06.568 "zoned": false, 00:10:06.568 "supported_io_types": { 00:10:06.568 "read": true, 00:10:06.568 "write": true, 00:10:06.568 "unmap": true, 00:10:06.568 "flush": true, 00:10:06.568 "reset": true, 00:10:06.568 "nvme_admin": false, 00:10:06.568 "nvme_io": false, 00:10:06.568 "nvme_io_md": false, 00:10:06.568 "write_zeroes": true, 00:10:06.568 "zcopy": true, 00:10:06.568 "get_zone_info": false, 00:10:06.568 "zone_management": false, 00:10:06.568 "zone_append": false, 00:10:06.568 "compare": false, 00:10:06.568 "compare_and_write": false, 00:10:06.568 "abort": true, 00:10:06.568 "seek_hole": false, 00:10:06.568 "seek_data": false, 00:10:06.568 "copy": true, 00:10:06.568 "nvme_iov_md": false 00:10:06.568 }, 00:10:06.568 
"memory_domains": [ 00:10:06.568 { 00:10:06.568 "dma_device_id": "system", 00:10:06.568 "dma_device_type": 1 00:10:06.568 }, 00:10:06.568 { 00:10:06.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.568 "dma_device_type": 2 00:10:06.568 } 00:10:06.568 ], 00:10:06.568 "driver_specific": {} 00:10:06.568 } 00:10:06.568 ] 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.568 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.569 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.569 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.569 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.569 11:21:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.569 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.569 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.569 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.569 "name": "Existed_Raid", 00:10:06.569 "uuid": "065ac55c-d51e-486b-badf-14206ab77ef7", 00:10:06.569 "strip_size_kb": 64, 00:10:06.569 "state": "online", 00:10:06.569 "raid_level": "concat", 00:10:06.569 "superblock": false, 00:10:06.569 "num_base_bdevs": 3, 00:10:06.569 "num_base_bdevs_discovered": 3, 00:10:06.569 "num_base_bdevs_operational": 3, 00:10:06.569 "base_bdevs_list": [ 00:10:06.569 { 00:10:06.569 "name": "NewBaseBdev", 00:10:06.569 "uuid": "9538eb0c-b1ea-4bfc-b2e9-f32dd9d97a28", 00:10:06.569 "is_configured": true, 00:10:06.569 "data_offset": 0, 00:10:06.569 "data_size": 65536 00:10:06.569 }, 00:10:06.569 { 00:10:06.569 "name": "BaseBdev2", 00:10:06.569 "uuid": "53647ead-d71e-4cbc-b43d-b31c32aa0cf6", 00:10:06.569 "is_configured": true, 00:10:06.569 "data_offset": 0, 00:10:06.569 "data_size": 65536 00:10:06.569 }, 00:10:06.569 { 00:10:06.569 "name": "BaseBdev3", 00:10:06.569 "uuid": "a225ec97-ddef-4af5-839e-5fc332397a98", 00:10:06.569 "is_configured": true, 00:10:06.569 "data_offset": 0, 00:10:06.569 "data_size": 65536 00:10:06.569 } 00:10:06.569 ] 00:10:06.569 }' 00:10:06.569 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.569 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.137 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.137 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.137 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:10:07.137 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.137 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.137 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.137 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.137 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.137 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.137 11:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.137 [2024-11-15 11:21:49.953019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.137 11:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.137 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.137 "name": "Existed_Raid", 00:10:07.137 "aliases": [ 00:10:07.137 "065ac55c-d51e-486b-badf-14206ab77ef7" 00:10:07.137 ], 00:10:07.137 "product_name": "Raid Volume", 00:10:07.137 "block_size": 512, 00:10:07.137 "num_blocks": 196608, 00:10:07.137 "uuid": "065ac55c-d51e-486b-badf-14206ab77ef7", 00:10:07.137 "assigned_rate_limits": { 00:10:07.137 "rw_ios_per_sec": 0, 00:10:07.137 "rw_mbytes_per_sec": 0, 00:10:07.137 "r_mbytes_per_sec": 0, 00:10:07.137 "w_mbytes_per_sec": 0 00:10:07.137 }, 00:10:07.137 "claimed": false, 00:10:07.137 "zoned": false, 00:10:07.137 "supported_io_types": { 00:10:07.137 "read": true, 00:10:07.137 "write": true, 00:10:07.137 "unmap": true, 00:10:07.137 "flush": true, 00:10:07.137 "reset": true, 00:10:07.137 "nvme_admin": false, 00:10:07.137 "nvme_io": false, 00:10:07.137 "nvme_io_md": false, 00:10:07.137 "write_zeroes": true, 
00:10:07.137 "zcopy": false, 00:10:07.137 "get_zone_info": false, 00:10:07.137 "zone_management": false, 00:10:07.137 "zone_append": false, 00:10:07.137 "compare": false, 00:10:07.137 "compare_and_write": false, 00:10:07.137 "abort": false, 00:10:07.137 "seek_hole": false, 00:10:07.137 "seek_data": false, 00:10:07.137 "copy": false, 00:10:07.137 "nvme_iov_md": false 00:10:07.137 }, 00:10:07.137 "memory_domains": [ 00:10:07.137 { 00:10:07.137 "dma_device_id": "system", 00:10:07.137 "dma_device_type": 1 00:10:07.137 }, 00:10:07.137 { 00:10:07.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.137 "dma_device_type": 2 00:10:07.137 }, 00:10:07.137 { 00:10:07.137 "dma_device_id": "system", 00:10:07.137 "dma_device_type": 1 00:10:07.137 }, 00:10:07.137 { 00:10:07.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.137 "dma_device_type": 2 00:10:07.137 }, 00:10:07.137 { 00:10:07.137 "dma_device_id": "system", 00:10:07.137 "dma_device_type": 1 00:10:07.137 }, 00:10:07.137 { 00:10:07.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.137 "dma_device_type": 2 00:10:07.137 } 00:10:07.137 ], 00:10:07.137 "driver_specific": { 00:10:07.137 "raid": { 00:10:07.137 "uuid": "065ac55c-d51e-486b-badf-14206ab77ef7", 00:10:07.137 "strip_size_kb": 64, 00:10:07.137 "state": "online", 00:10:07.137 "raid_level": "concat", 00:10:07.137 "superblock": false, 00:10:07.137 "num_base_bdevs": 3, 00:10:07.137 "num_base_bdevs_discovered": 3, 00:10:07.137 "num_base_bdevs_operational": 3, 00:10:07.137 "base_bdevs_list": [ 00:10:07.137 { 00:10:07.137 "name": "NewBaseBdev", 00:10:07.137 "uuid": "9538eb0c-b1ea-4bfc-b2e9-f32dd9d97a28", 00:10:07.137 "is_configured": true, 00:10:07.137 "data_offset": 0, 00:10:07.137 "data_size": 65536 00:10:07.137 }, 00:10:07.137 { 00:10:07.137 "name": "BaseBdev2", 00:10:07.137 "uuid": "53647ead-d71e-4cbc-b43d-b31c32aa0cf6", 00:10:07.137 "is_configured": true, 00:10:07.137 "data_offset": 0, 00:10:07.137 "data_size": 65536 00:10:07.137 }, 00:10:07.137 { 
00:10:07.137 "name": "BaseBdev3", 00:10:07.138 "uuid": "a225ec97-ddef-4af5-839e-5fc332397a98", 00:10:07.138 "is_configured": true, 00:10:07.138 "data_offset": 0, 00:10:07.138 "data_size": 65536 00:10:07.138 } 00:10:07.138 ] 00:10:07.138 } 00:10:07.138 } 00:10:07.138 }' 00:10:07.138 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.138 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:07.138 BaseBdev2 00:10:07.138 BaseBdev3' 00:10:07.138 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:07.398 [2024-11-15 11:21:50.276743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.398 [2024-11-15 11:21:50.276781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.398 [2024-11-15 11:21:50.276883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.398 [2024-11-15 11:21:50.276963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.398 [2024-11-15 11:21:50.276998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65491 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65491 ']' 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65491 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65491 00:10:07.398 killing process with pid 65491 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65491' 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 65491 00:10:07.398 [2024-11-15 11:21:50.314349] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.398 11:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65491 00:10:07.658 [2024-11-15 11:21:50.585385] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.043 ************************************ 00:10:09.043 END TEST raid_state_function_test 00:10:09.043 ************************************ 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:09.043 00:10:09.043 real 0m11.895s 00:10:09.043 user 0m19.669s 00:10:09.043 sys 0m1.632s 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.043 11:21:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:09.043 11:21:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:09.043 11:21:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:09.043 11:21:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.043 ************************************ 00:10:09.043 START TEST raid_state_function_test_sb 00:10:09.043 ************************************ 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:09.043 Process raid pid: 66130 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66130 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66130' 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66130 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66130 ']' 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:09.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:09.043 11:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.043 [2024-11-15 11:21:51.903897] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:10:09.043 [2024-11-15 11:21:51.904131] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.316 [2024-11-15 11:21:52.094174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.316 [2024-11-15 11:21:52.234560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.575 [2024-11-15 11:21:52.455823] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.575 [2024-11-15 11:21:52.455865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.142 11:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:10.142 11:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:10.142 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:10.142 11:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.142 11:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.142 [2024-11-15 11:21:52.869552] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.142 [2024-11-15 11:21:52.869653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.142 [2024-11-15 
11:21:52.869670] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.142 [2024-11-15 11:21:52.869686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.142 [2024-11-15 11:21:52.869695] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:10.142 [2024-11-15 11:21:52.869709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.142 11:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.143 "name": "Existed_Raid", 00:10:10.143 "uuid": "0955ffb0-cd84-498e-8b65-dfbcb1c3212d", 00:10:10.143 "strip_size_kb": 64, 00:10:10.143 "state": "configuring", 00:10:10.143 "raid_level": "concat", 00:10:10.143 "superblock": true, 00:10:10.143 "num_base_bdevs": 3, 00:10:10.143 "num_base_bdevs_discovered": 0, 00:10:10.143 "num_base_bdevs_operational": 3, 00:10:10.143 "base_bdevs_list": [ 00:10:10.143 { 00:10:10.143 "name": "BaseBdev1", 00:10:10.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.143 "is_configured": false, 00:10:10.143 "data_offset": 0, 00:10:10.143 "data_size": 0 00:10:10.143 }, 00:10:10.143 { 00:10:10.143 "name": "BaseBdev2", 00:10:10.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.143 "is_configured": false, 00:10:10.143 "data_offset": 0, 00:10:10.143 "data_size": 0 00:10:10.143 }, 00:10:10.143 { 00:10:10.143 "name": "BaseBdev3", 00:10:10.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.143 "is_configured": false, 00:10:10.143 "data_offset": 0, 00:10:10.143 "data_size": 0 00:10:10.143 } 00:10:10.143 ] 00:10:10.143 }' 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.143 11:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.709 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.709 11:21:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.709 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.709 [2024-11-15 11:21:53.393657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.709 [2024-11-15 11:21:53.393706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:10.709 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.709 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:10.709 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.709 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.709 [2024-11-15 11:21:53.405674] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.709 [2024-11-15 11:21:53.405928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.709 [2024-11-15 11:21:53.406073] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.709 [2024-11-15 11:21:53.406231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.709 [2024-11-15 11:21:53.406350] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:10.709 [2024-11-15 11:21:53.406481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.709 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.709 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.709 
11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.709 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.709 [2024-11-15 11:21:53.456196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.709 BaseBdev1 00:10:10.709 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.710 [ 00:10:10.710 { 
00:10:10.710 "name": "BaseBdev1", 00:10:10.710 "aliases": [ 00:10:10.710 "41de12cd-9467-40ba-8ffb-1e10b0b7ea11" 00:10:10.710 ], 00:10:10.710 "product_name": "Malloc disk", 00:10:10.710 "block_size": 512, 00:10:10.710 "num_blocks": 65536, 00:10:10.710 "uuid": "41de12cd-9467-40ba-8ffb-1e10b0b7ea11", 00:10:10.710 "assigned_rate_limits": { 00:10:10.710 "rw_ios_per_sec": 0, 00:10:10.710 "rw_mbytes_per_sec": 0, 00:10:10.710 "r_mbytes_per_sec": 0, 00:10:10.710 "w_mbytes_per_sec": 0 00:10:10.710 }, 00:10:10.710 "claimed": true, 00:10:10.710 "claim_type": "exclusive_write", 00:10:10.710 "zoned": false, 00:10:10.710 "supported_io_types": { 00:10:10.710 "read": true, 00:10:10.710 "write": true, 00:10:10.710 "unmap": true, 00:10:10.710 "flush": true, 00:10:10.710 "reset": true, 00:10:10.710 "nvme_admin": false, 00:10:10.710 "nvme_io": false, 00:10:10.710 "nvme_io_md": false, 00:10:10.710 "write_zeroes": true, 00:10:10.710 "zcopy": true, 00:10:10.710 "get_zone_info": false, 00:10:10.710 "zone_management": false, 00:10:10.710 "zone_append": false, 00:10:10.710 "compare": false, 00:10:10.710 "compare_and_write": false, 00:10:10.710 "abort": true, 00:10:10.710 "seek_hole": false, 00:10:10.710 "seek_data": false, 00:10:10.710 "copy": true, 00:10:10.710 "nvme_iov_md": false 00:10:10.710 }, 00:10:10.710 "memory_domains": [ 00:10:10.710 { 00:10:10.710 "dma_device_id": "system", 00:10:10.710 "dma_device_type": 1 00:10:10.710 }, 00:10:10.710 { 00:10:10.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.710 "dma_device_type": 2 00:10:10.710 } 00:10:10.710 ], 00:10:10.710 "driver_specific": {} 00:10:10.710 } 00:10:10.710 ] 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.710 "name": "Existed_Raid", 00:10:10.710 "uuid": "a7e58883-58f5-4876-bffb-adaab733c0d5", 00:10:10.710 "strip_size_kb": 64, 00:10:10.710 "state": "configuring", 00:10:10.710 "raid_level": "concat", 00:10:10.710 "superblock": true, 00:10:10.710 
"num_base_bdevs": 3, 00:10:10.710 "num_base_bdevs_discovered": 1, 00:10:10.710 "num_base_bdevs_operational": 3, 00:10:10.710 "base_bdevs_list": [ 00:10:10.710 { 00:10:10.710 "name": "BaseBdev1", 00:10:10.710 "uuid": "41de12cd-9467-40ba-8ffb-1e10b0b7ea11", 00:10:10.710 "is_configured": true, 00:10:10.710 "data_offset": 2048, 00:10:10.710 "data_size": 63488 00:10:10.710 }, 00:10:10.710 { 00:10:10.710 "name": "BaseBdev2", 00:10:10.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.710 "is_configured": false, 00:10:10.710 "data_offset": 0, 00:10:10.710 "data_size": 0 00:10:10.710 }, 00:10:10.710 { 00:10:10.710 "name": "BaseBdev3", 00:10:10.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.710 "is_configured": false, 00:10:10.710 "data_offset": 0, 00:10:10.710 "data_size": 0 00:10:10.710 } 00:10:10.710 ] 00:10:10.710 }' 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.710 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.286 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.286 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.286 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.286 [2024-11-15 11:21:53.992626] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.286 [2024-11-15 11:21:53.992729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:11.286 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.286 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:11.287 
11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.287 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.287 [2024-11-15 11:21:54.000532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.287 [2024-11-15 11:21:54.003190] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.287 [2024-11-15 11:21:54.003244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.287 [2024-11-15 11:21:54.003262] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:11.287 [2024-11-15 11:21:54.003279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.287 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.287 "name": "Existed_Raid", 00:10:11.287 "uuid": "4f67921b-a078-4476-befa-bf1586b8997b", 00:10:11.287 "strip_size_kb": 64, 00:10:11.288 "state": "configuring", 00:10:11.288 "raid_level": "concat", 00:10:11.288 "superblock": true, 00:10:11.288 "num_base_bdevs": 3, 00:10:11.288 "num_base_bdevs_discovered": 1, 00:10:11.288 "num_base_bdevs_operational": 3, 00:10:11.288 "base_bdevs_list": [ 00:10:11.288 { 00:10:11.288 "name": "BaseBdev1", 00:10:11.288 "uuid": "41de12cd-9467-40ba-8ffb-1e10b0b7ea11", 00:10:11.288 "is_configured": true, 00:10:11.288 "data_offset": 2048, 00:10:11.288 "data_size": 63488 00:10:11.288 }, 00:10:11.288 { 00:10:11.288 "name": "BaseBdev2", 00:10:11.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.288 "is_configured": false, 00:10:11.288 "data_offset": 0, 00:10:11.288 "data_size": 0 00:10:11.288 }, 00:10:11.288 { 00:10:11.288 "name": "BaseBdev3", 00:10:11.288 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:11.288 "is_configured": false, 00:10:11.288 "data_offset": 0, 00:10:11.288 "data_size": 0 00:10:11.288 } 00:10:11.288 ] 00:10:11.288 }' 00:10:11.288 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.288 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.862 [2024-11-15 11:21:54.583329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.862 BaseBdev2 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.862 [ 00:10:11.862 { 00:10:11.862 "name": "BaseBdev2", 00:10:11.862 "aliases": [ 00:10:11.862 "be1f75c9-4032-4882-ba3d-47a5eea5ee7e" 00:10:11.862 ], 00:10:11.862 "product_name": "Malloc disk", 00:10:11.862 "block_size": 512, 00:10:11.862 "num_blocks": 65536, 00:10:11.862 "uuid": "be1f75c9-4032-4882-ba3d-47a5eea5ee7e", 00:10:11.862 "assigned_rate_limits": { 00:10:11.862 "rw_ios_per_sec": 0, 00:10:11.862 "rw_mbytes_per_sec": 0, 00:10:11.862 "r_mbytes_per_sec": 0, 00:10:11.862 "w_mbytes_per_sec": 0 00:10:11.862 }, 00:10:11.862 "claimed": true, 00:10:11.862 "claim_type": "exclusive_write", 00:10:11.862 "zoned": false, 00:10:11.862 "supported_io_types": { 00:10:11.862 "read": true, 00:10:11.862 "write": true, 00:10:11.862 "unmap": true, 00:10:11.862 "flush": true, 00:10:11.862 "reset": true, 00:10:11.862 "nvme_admin": false, 00:10:11.862 "nvme_io": false, 00:10:11.862 "nvme_io_md": false, 00:10:11.862 "write_zeroes": true, 00:10:11.862 "zcopy": true, 00:10:11.862 "get_zone_info": false, 00:10:11.862 "zone_management": false, 00:10:11.862 "zone_append": false, 00:10:11.862 "compare": false, 00:10:11.862 "compare_and_write": false, 00:10:11.862 "abort": true, 00:10:11.862 "seek_hole": false, 00:10:11.862 "seek_data": false, 00:10:11.862 "copy": true, 00:10:11.862 "nvme_iov_md": false 00:10:11.862 }, 00:10:11.862 "memory_domains": [ 00:10:11.862 { 00:10:11.862 "dma_device_id": "system", 00:10:11.862 "dma_device_type": 1 00:10:11.862 }, 00:10:11.862 { 00:10:11.862 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.862 "dma_device_type": 2 00:10:11.862 } 00:10:11.862 ], 00:10:11.862 "driver_specific": {} 00:10:11.862 } 00:10:11.862 ] 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.862 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.863 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.863 "name": "Existed_Raid", 00:10:11.863 "uuid": "4f67921b-a078-4476-befa-bf1586b8997b", 00:10:11.863 "strip_size_kb": 64, 00:10:11.863 "state": "configuring", 00:10:11.863 "raid_level": "concat", 00:10:11.863 "superblock": true, 00:10:11.863 "num_base_bdevs": 3, 00:10:11.863 "num_base_bdevs_discovered": 2, 00:10:11.863 "num_base_bdevs_operational": 3, 00:10:11.863 "base_bdevs_list": [ 00:10:11.863 { 00:10:11.863 "name": "BaseBdev1", 00:10:11.863 "uuid": "41de12cd-9467-40ba-8ffb-1e10b0b7ea11", 00:10:11.863 "is_configured": true, 00:10:11.863 "data_offset": 2048, 00:10:11.863 "data_size": 63488 00:10:11.863 }, 00:10:11.863 { 00:10:11.863 "name": "BaseBdev2", 00:10:11.863 "uuid": "be1f75c9-4032-4882-ba3d-47a5eea5ee7e", 00:10:11.863 "is_configured": true, 00:10:11.863 "data_offset": 2048, 00:10:11.863 "data_size": 63488 00:10:11.863 }, 00:10:11.863 { 00:10:11.863 "name": "BaseBdev3", 00:10:11.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.863 "is_configured": false, 00:10:11.863 "data_offset": 0, 00:10:11.863 "data_size": 0 00:10:11.863 } 00:10:11.863 ] 00:10:11.863 }' 00:10:11.863 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.863 11:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.429 11:21:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.429 [2024-11-15 11:21:55.161137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.429 [2024-11-15 11:21:55.161440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:12.429 [2024-11-15 11:21:55.161478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:12.429 BaseBdev3 00:10:12.429 [2024-11-15 11:21:55.162055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:12.429 [2024-11-15 11:21:55.162362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:12.429 [2024-11-15 11:21:55.162402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.429 [2024-11-15 11:21:55.162604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.429 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.429 [ 00:10:12.429 { 00:10:12.429 "name": "BaseBdev3", 00:10:12.429 "aliases": [ 00:10:12.429 "5fde1018-e59f-40c4-b5f4-ab734d940a2d" 00:10:12.429 ], 00:10:12.429 "product_name": "Malloc disk", 00:10:12.429 "block_size": 512, 00:10:12.429 "num_blocks": 65536, 00:10:12.429 "uuid": "5fde1018-e59f-40c4-b5f4-ab734d940a2d", 00:10:12.429 "assigned_rate_limits": { 00:10:12.429 "rw_ios_per_sec": 0, 00:10:12.429 "rw_mbytes_per_sec": 0, 00:10:12.429 "r_mbytes_per_sec": 0, 00:10:12.429 "w_mbytes_per_sec": 0 00:10:12.429 }, 00:10:12.429 "claimed": true, 00:10:12.429 "claim_type": "exclusive_write", 00:10:12.429 "zoned": false, 00:10:12.429 "supported_io_types": { 00:10:12.429 "read": true, 00:10:12.429 "write": true, 00:10:12.429 "unmap": true, 00:10:12.429 "flush": true, 00:10:12.429 "reset": true, 00:10:12.429 "nvme_admin": false, 00:10:12.429 "nvme_io": false, 00:10:12.429 "nvme_io_md": false, 00:10:12.429 "write_zeroes": true, 00:10:12.429 "zcopy": true, 00:10:12.429 "get_zone_info": false, 00:10:12.429 "zone_management": false, 00:10:12.429 "zone_append": false, 00:10:12.429 "compare": false, 00:10:12.429 "compare_and_write": false, 00:10:12.429 "abort": true, 00:10:12.429 "seek_hole": false, 00:10:12.429 "seek_data": false, 
00:10:12.429 "copy": true, 00:10:12.429 "nvme_iov_md": false 00:10:12.429 }, 00:10:12.429 "memory_domains": [ 00:10:12.429 { 00:10:12.429 "dma_device_id": "system", 00:10:12.429 "dma_device_type": 1 00:10:12.429 }, 00:10:12.429 { 00:10:12.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.429 "dma_device_type": 2 00:10:12.429 } 00:10:12.430 ], 00:10:12.430 "driver_specific": {} 00:10:12.430 } 00:10:12.430 ] 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.430 "name": "Existed_Raid", 00:10:12.430 "uuid": "4f67921b-a078-4476-befa-bf1586b8997b", 00:10:12.430 "strip_size_kb": 64, 00:10:12.430 "state": "online", 00:10:12.430 "raid_level": "concat", 00:10:12.430 "superblock": true, 00:10:12.430 "num_base_bdevs": 3, 00:10:12.430 "num_base_bdevs_discovered": 3, 00:10:12.430 "num_base_bdevs_operational": 3, 00:10:12.430 "base_bdevs_list": [ 00:10:12.430 { 00:10:12.430 "name": "BaseBdev1", 00:10:12.430 "uuid": "41de12cd-9467-40ba-8ffb-1e10b0b7ea11", 00:10:12.430 "is_configured": true, 00:10:12.430 "data_offset": 2048, 00:10:12.430 "data_size": 63488 00:10:12.430 }, 00:10:12.430 { 00:10:12.430 "name": "BaseBdev2", 00:10:12.430 "uuid": "be1f75c9-4032-4882-ba3d-47a5eea5ee7e", 00:10:12.430 "is_configured": true, 00:10:12.430 "data_offset": 2048, 00:10:12.430 "data_size": 63488 00:10:12.430 }, 00:10:12.430 { 00:10:12.430 "name": "BaseBdev3", 00:10:12.430 "uuid": "5fde1018-e59f-40c4-b5f4-ab734d940a2d", 00:10:12.430 "is_configured": true, 00:10:12.430 "data_offset": 2048, 00:10:12.430 "data_size": 63488 00:10:12.430 } 00:10:12.430 ] 00:10:12.430 }' 00:10:12.430 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.430 11:21:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.997 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.998 [2024-11-15 11:21:55.721877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.998 "name": "Existed_Raid", 00:10:12.998 "aliases": [ 00:10:12.998 "4f67921b-a078-4476-befa-bf1586b8997b" 00:10:12.998 ], 00:10:12.998 "product_name": "Raid Volume", 00:10:12.998 "block_size": 512, 00:10:12.998 "num_blocks": 190464, 00:10:12.998 "uuid": "4f67921b-a078-4476-befa-bf1586b8997b", 00:10:12.998 "assigned_rate_limits": { 00:10:12.998 "rw_ios_per_sec": 0, 00:10:12.998 "rw_mbytes_per_sec": 0, 00:10:12.998 
"r_mbytes_per_sec": 0, 00:10:12.998 "w_mbytes_per_sec": 0 00:10:12.998 }, 00:10:12.998 "claimed": false, 00:10:12.998 "zoned": false, 00:10:12.998 "supported_io_types": { 00:10:12.998 "read": true, 00:10:12.998 "write": true, 00:10:12.998 "unmap": true, 00:10:12.998 "flush": true, 00:10:12.998 "reset": true, 00:10:12.998 "nvme_admin": false, 00:10:12.998 "nvme_io": false, 00:10:12.998 "nvme_io_md": false, 00:10:12.998 "write_zeroes": true, 00:10:12.998 "zcopy": false, 00:10:12.998 "get_zone_info": false, 00:10:12.998 "zone_management": false, 00:10:12.998 "zone_append": false, 00:10:12.998 "compare": false, 00:10:12.998 "compare_and_write": false, 00:10:12.998 "abort": false, 00:10:12.998 "seek_hole": false, 00:10:12.998 "seek_data": false, 00:10:12.998 "copy": false, 00:10:12.998 "nvme_iov_md": false 00:10:12.998 }, 00:10:12.998 "memory_domains": [ 00:10:12.998 { 00:10:12.998 "dma_device_id": "system", 00:10:12.998 "dma_device_type": 1 00:10:12.998 }, 00:10:12.998 { 00:10:12.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.998 "dma_device_type": 2 00:10:12.998 }, 00:10:12.998 { 00:10:12.998 "dma_device_id": "system", 00:10:12.998 "dma_device_type": 1 00:10:12.998 }, 00:10:12.998 { 00:10:12.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.998 "dma_device_type": 2 00:10:12.998 }, 00:10:12.998 { 00:10:12.998 "dma_device_id": "system", 00:10:12.998 "dma_device_type": 1 00:10:12.998 }, 00:10:12.998 { 00:10:12.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.998 "dma_device_type": 2 00:10:12.998 } 00:10:12.998 ], 00:10:12.998 "driver_specific": { 00:10:12.998 "raid": { 00:10:12.998 "uuid": "4f67921b-a078-4476-befa-bf1586b8997b", 00:10:12.998 "strip_size_kb": 64, 00:10:12.998 "state": "online", 00:10:12.998 "raid_level": "concat", 00:10:12.998 "superblock": true, 00:10:12.998 "num_base_bdevs": 3, 00:10:12.998 "num_base_bdevs_discovered": 3, 00:10:12.998 "num_base_bdevs_operational": 3, 00:10:12.998 "base_bdevs_list": [ 00:10:12.998 { 00:10:12.998 
"name": "BaseBdev1", 00:10:12.998 "uuid": "41de12cd-9467-40ba-8ffb-1e10b0b7ea11", 00:10:12.998 "is_configured": true, 00:10:12.998 "data_offset": 2048, 00:10:12.998 "data_size": 63488 00:10:12.998 }, 00:10:12.998 { 00:10:12.998 "name": "BaseBdev2", 00:10:12.998 "uuid": "be1f75c9-4032-4882-ba3d-47a5eea5ee7e", 00:10:12.998 "is_configured": true, 00:10:12.998 "data_offset": 2048, 00:10:12.998 "data_size": 63488 00:10:12.998 }, 00:10:12.998 { 00:10:12.998 "name": "BaseBdev3", 00:10:12.998 "uuid": "5fde1018-e59f-40c4-b5f4-ab734d940a2d", 00:10:12.998 "is_configured": true, 00:10:12.998 "data_offset": 2048, 00:10:12.998 "data_size": 63488 00:10:12.998 } 00:10:12.998 ] 00:10:12.998 } 00:10:12.998 } 00:10:12.998 }' 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:12.998 BaseBdev2 00:10:12.998 BaseBdev3' 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.998 11:21:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.998 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.257 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.257 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.258 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.258 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.258 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.258 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.258 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.258 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.258 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.258 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:13.258 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.258 11:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.258 [2024-11-15 11:21:56.049742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:13.258 [2024-11-15 11:21:56.049781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.258 [2024-11-15 11:21:56.049866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.258 "name": "Existed_Raid", 00:10:13.258 "uuid": "4f67921b-a078-4476-befa-bf1586b8997b", 00:10:13.258 "strip_size_kb": 64, 00:10:13.258 "state": "offline", 00:10:13.258 "raid_level": "concat", 00:10:13.258 "superblock": true, 00:10:13.258 "num_base_bdevs": 3, 00:10:13.258 "num_base_bdevs_discovered": 2, 00:10:13.258 "num_base_bdevs_operational": 2, 00:10:13.258 "base_bdevs_list": [ 00:10:13.258 { 00:10:13.258 "name": null, 00:10:13.258 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:13.258 "is_configured": false, 00:10:13.258 "data_offset": 0, 00:10:13.258 "data_size": 63488 00:10:13.258 }, 00:10:13.258 { 00:10:13.258 "name": "BaseBdev2", 00:10:13.258 "uuid": "be1f75c9-4032-4882-ba3d-47a5eea5ee7e", 00:10:13.258 "is_configured": true, 00:10:13.258 "data_offset": 2048, 00:10:13.258 "data_size": 63488 00:10:13.258 }, 00:10:13.258 { 00:10:13.258 "name": "BaseBdev3", 00:10:13.258 "uuid": "5fde1018-e59f-40c4-b5f4-ab734d940a2d", 00:10:13.258 "is_configured": true, 00:10:13.258 "data_offset": 2048, 00:10:13.258 "data_size": 63488 00:10:13.258 } 00:10:13.258 ] 00:10:13.258 }' 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.258 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.825 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:13.825 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:13.825 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.825 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:13.826 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.826 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.826 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.826 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:13.826 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:13.826 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:10:13.826 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.826 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.826 [2024-11-15 11:21:56.720978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.085 [2024-11-15 11:21:56.866539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.085 [2024-11-15 11:21:56.866634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:14.085 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.085 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:14.085 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:14.085 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:14.085 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:14.085 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.085 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.085 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.085 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.344 BaseBdev2 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.344 
11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.344 [ 00:10:14.344 { 00:10:14.344 "name": "BaseBdev2", 00:10:14.344 "aliases": [ 00:10:14.344 "ed1ba279-0e50-40cc-8336-1d076cbd9c4c" 00:10:14.344 ], 00:10:14.344 "product_name": "Malloc disk", 00:10:14.344 "block_size": 512, 00:10:14.344 "num_blocks": 65536, 00:10:14.344 "uuid": "ed1ba279-0e50-40cc-8336-1d076cbd9c4c", 00:10:14.344 "assigned_rate_limits": { 00:10:14.344 "rw_ios_per_sec": 0, 00:10:14.344 "rw_mbytes_per_sec": 0, 00:10:14.344 "r_mbytes_per_sec": 0, 00:10:14.344 "w_mbytes_per_sec": 0 
00:10:14.344 }, 00:10:14.344 "claimed": false, 00:10:14.344 "zoned": false, 00:10:14.344 "supported_io_types": { 00:10:14.344 "read": true, 00:10:14.344 "write": true, 00:10:14.344 "unmap": true, 00:10:14.344 "flush": true, 00:10:14.344 "reset": true, 00:10:14.344 "nvme_admin": false, 00:10:14.344 "nvme_io": false, 00:10:14.344 "nvme_io_md": false, 00:10:14.344 "write_zeroes": true, 00:10:14.344 "zcopy": true, 00:10:14.344 "get_zone_info": false, 00:10:14.344 "zone_management": false, 00:10:14.344 "zone_append": false, 00:10:14.344 "compare": false, 00:10:14.344 "compare_and_write": false, 00:10:14.344 "abort": true, 00:10:14.344 "seek_hole": false, 00:10:14.344 "seek_data": false, 00:10:14.344 "copy": true, 00:10:14.344 "nvme_iov_md": false 00:10:14.344 }, 00:10:14.344 "memory_domains": [ 00:10:14.344 { 00:10:14.344 "dma_device_id": "system", 00:10:14.344 "dma_device_type": 1 00:10:14.344 }, 00:10:14.344 { 00:10:14.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.344 "dma_device_type": 2 00:10:14.344 } 00:10:14.344 ], 00:10:14.344 "driver_specific": {} 00:10:14.344 } 00:10:14.344 ] 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.344 BaseBdev3 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.344 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.344 [ 00:10:14.344 { 00:10:14.344 "name": "BaseBdev3", 00:10:14.344 "aliases": [ 00:10:14.344 "ac4ef583-0d1d-4f91-a5dd-e3e5e5f975ec" 00:10:14.344 ], 00:10:14.344 "product_name": "Malloc disk", 00:10:14.344 "block_size": 512, 00:10:14.344 "num_blocks": 65536, 00:10:14.344 "uuid": "ac4ef583-0d1d-4f91-a5dd-e3e5e5f975ec", 00:10:14.344 "assigned_rate_limits": { 00:10:14.344 "rw_ios_per_sec": 0, 00:10:14.344 "rw_mbytes_per_sec": 0, 
00:10:14.344 "r_mbytes_per_sec": 0, 00:10:14.344 "w_mbytes_per_sec": 0 00:10:14.344 }, 00:10:14.344 "claimed": false, 00:10:14.344 "zoned": false, 00:10:14.344 "supported_io_types": { 00:10:14.344 "read": true, 00:10:14.344 "write": true, 00:10:14.344 "unmap": true, 00:10:14.344 "flush": true, 00:10:14.344 "reset": true, 00:10:14.344 "nvme_admin": false, 00:10:14.344 "nvme_io": false, 00:10:14.344 "nvme_io_md": false, 00:10:14.344 "write_zeroes": true, 00:10:14.344 "zcopy": true, 00:10:14.344 "get_zone_info": false, 00:10:14.344 "zone_management": false, 00:10:14.344 "zone_append": false, 00:10:14.344 "compare": false, 00:10:14.344 "compare_and_write": false, 00:10:14.344 "abort": true, 00:10:14.344 "seek_hole": false, 00:10:14.344 "seek_data": false, 00:10:14.344 "copy": true, 00:10:14.345 "nvme_iov_md": false 00:10:14.345 }, 00:10:14.345 "memory_domains": [ 00:10:14.345 { 00:10:14.345 "dma_device_id": "system", 00:10:14.345 "dma_device_type": 1 00:10:14.345 }, 00:10:14.345 { 00:10:14.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.345 "dma_device_type": 2 00:10:14.345 } 00:10:14.345 ], 00:10:14.345 "driver_specific": {} 00:10:14.345 } 00:10:14.345 ] 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.345 [2024-11-15 11:21:57.163727] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.345 [2024-11-15 11:21:57.163853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.345 [2024-11-15 11:21:57.163892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.345 [2024-11-15 11:21:57.166496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.345 11:21:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.345 "name": "Existed_Raid", 00:10:14.345 "uuid": "e7323c82-7fb0-469f-a75f-a0fb0d7af29f", 00:10:14.345 "strip_size_kb": 64, 00:10:14.345 "state": "configuring", 00:10:14.345 "raid_level": "concat", 00:10:14.345 "superblock": true, 00:10:14.345 "num_base_bdevs": 3, 00:10:14.345 "num_base_bdevs_discovered": 2, 00:10:14.345 "num_base_bdevs_operational": 3, 00:10:14.345 "base_bdevs_list": [ 00:10:14.345 { 00:10:14.345 "name": "BaseBdev1", 00:10:14.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.345 "is_configured": false, 00:10:14.345 "data_offset": 0, 00:10:14.345 "data_size": 0 00:10:14.345 }, 00:10:14.345 { 00:10:14.345 "name": "BaseBdev2", 00:10:14.345 "uuid": "ed1ba279-0e50-40cc-8336-1d076cbd9c4c", 00:10:14.345 "is_configured": true, 00:10:14.345 "data_offset": 2048, 00:10:14.345 "data_size": 63488 00:10:14.345 }, 00:10:14.345 { 00:10:14.345 "name": "BaseBdev3", 00:10:14.345 "uuid": "ac4ef583-0d1d-4f91-a5dd-e3e5e5f975ec", 00:10:14.345 "is_configured": true, 00:10:14.345 "data_offset": 2048, 00:10:14.345 "data_size": 63488 00:10:14.345 } 00:10:14.345 ] 00:10:14.345 }' 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.345 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.914 [2024-11-15 11:21:57.695908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.914 11:21:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.914 "name": "Existed_Raid", 00:10:14.914 "uuid": "e7323c82-7fb0-469f-a75f-a0fb0d7af29f", 00:10:14.914 "strip_size_kb": 64, 00:10:14.914 "state": "configuring", 00:10:14.914 "raid_level": "concat", 00:10:14.914 "superblock": true, 00:10:14.914 "num_base_bdevs": 3, 00:10:14.914 "num_base_bdevs_discovered": 1, 00:10:14.914 "num_base_bdevs_operational": 3, 00:10:14.914 "base_bdevs_list": [ 00:10:14.914 { 00:10:14.914 "name": "BaseBdev1", 00:10:14.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.914 "is_configured": false, 00:10:14.914 "data_offset": 0, 00:10:14.914 "data_size": 0 00:10:14.914 }, 00:10:14.914 { 00:10:14.914 "name": null, 00:10:14.914 "uuid": "ed1ba279-0e50-40cc-8336-1d076cbd9c4c", 00:10:14.914 "is_configured": false, 00:10:14.914 "data_offset": 0, 00:10:14.914 "data_size": 63488 00:10:14.914 }, 00:10:14.914 { 00:10:14.914 "name": "BaseBdev3", 00:10:14.914 "uuid": "ac4ef583-0d1d-4f91-a5dd-e3e5e5f975ec", 00:10:14.914 "is_configured": true, 00:10:14.914 "data_offset": 2048, 00:10:14.914 "data_size": 63488 00:10:14.914 } 00:10:14.914 ] 00:10:14.914 }' 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.914 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.514 11:21:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.514 [2024-11-15 11:21:58.310734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.514 BaseBdev1 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.514 
11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.514 [ 00:10:15.514 { 00:10:15.514 "name": "BaseBdev1", 00:10:15.514 "aliases": [ 00:10:15.514 "807654ac-db7c-4651-a500-2d8c56a9bd15" 00:10:15.514 ], 00:10:15.514 "product_name": "Malloc disk", 00:10:15.514 "block_size": 512, 00:10:15.514 "num_blocks": 65536, 00:10:15.514 "uuid": "807654ac-db7c-4651-a500-2d8c56a9bd15", 00:10:15.514 "assigned_rate_limits": { 00:10:15.514 "rw_ios_per_sec": 0, 00:10:15.514 "rw_mbytes_per_sec": 0, 00:10:15.514 "r_mbytes_per_sec": 0, 00:10:15.514 "w_mbytes_per_sec": 0 00:10:15.514 }, 00:10:15.514 "claimed": true, 00:10:15.514 "claim_type": "exclusive_write", 00:10:15.514 "zoned": false, 00:10:15.514 "supported_io_types": { 00:10:15.514 "read": true, 00:10:15.514 "write": true, 00:10:15.514 "unmap": true, 00:10:15.514 "flush": true, 00:10:15.514 "reset": true, 00:10:15.514 "nvme_admin": false, 00:10:15.514 "nvme_io": false, 00:10:15.514 "nvme_io_md": false, 00:10:15.514 "write_zeroes": true, 00:10:15.514 "zcopy": true, 00:10:15.514 "get_zone_info": false, 00:10:15.514 "zone_management": false, 00:10:15.514 "zone_append": false, 00:10:15.514 "compare": false, 00:10:15.514 "compare_and_write": false, 00:10:15.514 "abort": true, 00:10:15.514 "seek_hole": false, 00:10:15.514 "seek_data": false, 00:10:15.514 "copy": true, 00:10:15.514 "nvme_iov_md": false 00:10:15.514 }, 00:10:15.514 "memory_domains": [ 00:10:15.514 { 00:10:15.514 "dma_device_id": "system", 00:10:15.514 "dma_device_type": 1 00:10:15.514 }, 00:10:15.514 { 00:10:15.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:15.514 "dma_device_type": 2 00:10:15.514 } 00:10:15.514 ], 00:10:15.514 "driver_specific": {} 00:10:15.514 } 00:10:15.514 ] 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.514 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.514 "name": "Existed_Raid", 00:10:15.514 "uuid": "e7323c82-7fb0-469f-a75f-a0fb0d7af29f", 00:10:15.514 "strip_size_kb": 64, 00:10:15.514 "state": "configuring", 00:10:15.514 "raid_level": "concat", 00:10:15.514 "superblock": true, 00:10:15.514 "num_base_bdevs": 3, 00:10:15.514 "num_base_bdevs_discovered": 2, 00:10:15.514 "num_base_bdevs_operational": 3, 00:10:15.514 "base_bdevs_list": [ 00:10:15.514 { 00:10:15.514 "name": "BaseBdev1", 00:10:15.514 "uuid": "807654ac-db7c-4651-a500-2d8c56a9bd15", 00:10:15.514 "is_configured": true, 00:10:15.514 "data_offset": 2048, 00:10:15.514 "data_size": 63488 00:10:15.514 }, 00:10:15.514 { 00:10:15.514 "name": null, 00:10:15.514 "uuid": "ed1ba279-0e50-40cc-8336-1d076cbd9c4c", 00:10:15.514 "is_configured": false, 00:10:15.514 "data_offset": 0, 00:10:15.515 "data_size": 63488 00:10:15.515 }, 00:10:15.515 { 00:10:15.515 "name": "BaseBdev3", 00:10:15.515 "uuid": "ac4ef583-0d1d-4f91-a5dd-e3e5e5f975ec", 00:10:15.515 "is_configured": true, 00:10:15.515 "data_offset": 2048, 00:10:15.515 "data_size": 63488 00:10:15.515 } 00:10:15.515 ] 00:10:15.515 }' 00:10:15.515 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.515 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.082 [2024-11-15 11:21:58.911026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.082 "name": "Existed_Raid", 00:10:16.082 "uuid": "e7323c82-7fb0-469f-a75f-a0fb0d7af29f", 00:10:16.082 "strip_size_kb": 64, 00:10:16.082 "state": "configuring", 00:10:16.082 "raid_level": "concat", 00:10:16.082 "superblock": true, 00:10:16.082 "num_base_bdevs": 3, 00:10:16.082 "num_base_bdevs_discovered": 1, 00:10:16.082 "num_base_bdevs_operational": 3, 00:10:16.082 "base_bdevs_list": [ 00:10:16.082 { 00:10:16.082 "name": "BaseBdev1", 00:10:16.082 "uuid": "807654ac-db7c-4651-a500-2d8c56a9bd15", 00:10:16.082 "is_configured": true, 00:10:16.082 "data_offset": 2048, 00:10:16.082 "data_size": 63488 00:10:16.082 }, 00:10:16.082 { 00:10:16.082 "name": null, 00:10:16.082 "uuid": "ed1ba279-0e50-40cc-8336-1d076cbd9c4c", 00:10:16.082 "is_configured": false, 00:10:16.082 "data_offset": 0, 00:10:16.082 "data_size": 63488 00:10:16.082 }, 00:10:16.082 { 00:10:16.082 "name": null, 00:10:16.082 "uuid": "ac4ef583-0d1d-4f91-a5dd-e3e5e5f975ec", 00:10:16.082 "is_configured": false, 00:10:16.082 "data_offset": 0, 00:10:16.082 "data_size": 63488 00:10:16.082 } 00:10:16.082 ] 00:10:16.082 }' 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.082 11:21:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.649 [2024-11-15 11:21:59.531252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.649 11:21:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.649 "name": "Existed_Raid", 00:10:16.649 "uuid": "e7323c82-7fb0-469f-a75f-a0fb0d7af29f", 00:10:16.649 "strip_size_kb": 64, 00:10:16.649 "state": "configuring", 00:10:16.649 "raid_level": "concat", 00:10:16.649 "superblock": true, 00:10:16.649 "num_base_bdevs": 3, 00:10:16.649 "num_base_bdevs_discovered": 2, 00:10:16.649 "num_base_bdevs_operational": 3, 00:10:16.649 "base_bdevs_list": [ 00:10:16.649 { 00:10:16.649 "name": "BaseBdev1", 00:10:16.649 "uuid": "807654ac-db7c-4651-a500-2d8c56a9bd15", 00:10:16.649 "is_configured": true, 00:10:16.649 "data_offset": 2048, 00:10:16.649 "data_size": 63488 00:10:16.649 }, 00:10:16.649 { 00:10:16.649 "name": null, 00:10:16.649 "uuid": "ed1ba279-0e50-40cc-8336-1d076cbd9c4c", 00:10:16.649 "is_configured": 
false, 00:10:16.649 "data_offset": 0, 00:10:16.649 "data_size": 63488 00:10:16.649 }, 00:10:16.649 { 00:10:16.649 "name": "BaseBdev3", 00:10:16.649 "uuid": "ac4ef583-0d1d-4f91-a5dd-e3e5e5f975ec", 00:10:16.649 "is_configured": true, 00:10:16.649 "data_offset": 2048, 00:10:16.649 "data_size": 63488 00:10:16.649 } 00:10:16.649 ] 00:10:16.649 }' 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.649 11:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.216 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.216 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.216 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.216 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.216 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.216 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:17.216 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.216 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.216 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.216 [2024-11-15 11:22:00.091444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.475 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.475 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:17.475 11:22:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.475 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.475 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.475 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.475 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.475 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.475 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.476 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.476 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.476 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.476 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.476 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.476 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.476 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.476 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.476 "name": "Existed_Raid", 00:10:17.476 "uuid": "e7323c82-7fb0-469f-a75f-a0fb0d7af29f", 00:10:17.476 "strip_size_kb": 64, 00:10:17.476 "state": "configuring", 00:10:17.476 "raid_level": "concat", 00:10:17.476 "superblock": true, 00:10:17.476 "num_base_bdevs": 3, 00:10:17.476 
"num_base_bdevs_discovered": 1, 00:10:17.476 "num_base_bdevs_operational": 3, 00:10:17.476 "base_bdevs_list": [ 00:10:17.476 { 00:10:17.476 "name": null, 00:10:17.476 "uuid": "807654ac-db7c-4651-a500-2d8c56a9bd15", 00:10:17.476 "is_configured": false, 00:10:17.476 "data_offset": 0, 00:10:17.476 "data_size": 63488 00:10:17.476 }, 00:10:17.476 { 00:10:17.476 "name": null, 00:10:17.476 "uuid": "ed1ba279-0e50-40cc-8336-1d076cbd9c4c", 00:10:17.476 "is_configured": false, 00:10:17.476 "data_offset": 0, 00:10:17.476 "data_size": 63488 00:10:17.476 }, 00:10:17.476 { 00:10:17.476 "name": "BaseBdev3", 00:10:17.476 "uuid": "ac4ef583-0d1d-4f91-a5dd-e3e5e5f975ec", 00:10:17.476 "is_configured": true, 00:10:17.476 "data_offset": 2048, 00:10:17.476 "data_size": 63488 00:10:17.476 } 00:10:17.476 ] 00:10:17.476 }' 00:10:17.476 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.476 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.044 11:22:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.044 [2024-11-15 11:22:00.737217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.044 
11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.044 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.044 "name": "Existed_Raid", 00:10:18.044 "uuid": "e7323c82-7fb0-469f-a75f-a0fb0d7af29f", 00:10:18.044 "strip_size_kb": 64, 00:10:18.044 "state": "configuring", 00:10:18.045 "raid_level": "concat", 00:10:18.045 "superblock": true, 00:10:18.045 "num_base_bdevs": 3, 00:10:18.045 "num_base_bdevs_discovered": 2, 00:10:18.045 "num_base_bdevs_operational": 3, 00:10:18.045 "base_bdevs_list": [ 00:10:18.045 { 00:10:18.045 "name": null, 00:10:18.045 "uuid": "807654ac-db7c-4651-a500-2d8c56a9bd15", 00:10:18.045 "is_configured": false, 00:10:18.045 "data_offset": 0, 00:10:18.045 "data_size": 63488 00:10:18.045 }, 00:10:18.045 { 00:10:18.045 "name": "BaseBdev2", 00:10:18.045 "uuid": "ed1ba279-0e50-40cc-8336-1d076cbd9c4c", 00:10:18.045 "is_configured": true, 00:10:18.045 "data_offset": 2048, 00:10:18.045 "data_size": 63488 00:10:18.045 }, 00:10:18.045 { 00:10:18.045 "name": "BaseBdev3", 00:10:18.045 "uuid": "ac4ef583-0d1d-4f91-a5dd-e3e5e5f975ec", 00:10:18.045 "is_configured": true, 00:10:18.045 "data_offset": 2048, 00:10:18.045 "data_size": 63488 00:10:18.045 } 00:10:18.045 ] 00:10:18.045 }' 00:10:18.045 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.045 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.303 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.303 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.303 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.304 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:10:18.304 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 807654ac-db7c-4651-a500-2d8c56a9bd15 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.563 [2024-11-15 11:22:01.367875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:18.563 [2024-11-15 11:22:01.368136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:18.563 [2024-11-15 11:22:01.368159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:18.563 [2024-11-15 11:22:01.368515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:18.563 NewBaseBdev 00:10:18.563 [2024-11-15 11:22:01.368751] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:18.563 [2024-11-15 11:22:01.368767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:10:18.563 [2024-11-15 11:22:01.368934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.563 [ 00:10:18.563 { 00:10:18.563 "name": "NewBaseBdev", 00:10:18.563 "aliases": [ 00:10:18.563 "807654ac-db7c-4651-a500-2d8c56a9bd15" 00:10:18.563 ], 00:10:18.563 "product_name": "Malloc disk", 00:10:18.563 "block_size": 512, 
00:10:18.563 "num_blocks": 65536, 00:10:18.563 "uuid": "807654ac-db7c-4651-a500-2d8c56a9bd15", 00:10:18.563 "assigned_rate_limits": { 00:10:18.563 "rw_ios_per_sec": 0, 00:10:18.563 "rw_mbytes_per_sec": 0, 00:10:18.563 "r_mbytes_per_sec": 0, 00:10:18.563 "w_mbytes_per_sec": 0 00:10:18.563 }, 00:10:18.563 "claimed": true, 00:10:18.563 "claim_type": "exclusive_write", 00:10:18.563 "zoned": false, 00:10:18.563 "supported_io_types": { 00:10:18.563 "read": true, 00:10:18.563 "write": true, 00:10:18.563 "unmap": true, 00:10:18.563 "flush": true, 00:10:18.563 "reset": true, 00:10:18.563 "nvme_admin": false, 00:10:18.563 "nvme_io": false, 00:10:18.563 "nvme_io_md": false, 00:10:18.563 "write_zeroes": true, 00:10:18.563 "zcopy": true, 00:10:18.563 "get_zone_info": false, 00:10:18.563 "zone_management": false, 00:10:18.563 "zone_append": false, 00:10:18.563 "compare": false, 00:10:18.563 "compare_and_write": false, 00:10:18.563 "abort": true, 00:10:18.563 "seek_hole": false, 00:10:18.563 "seek_data": false, 00:10:18.563 "copy": true, 00:10:18.563 "nvme_iov_md": false 00:10:18.563 }, 00:10:18.563 "memory_domains": [ 00:10:18.563 { 00:10:18.563 "dma_device_id": "system", 00:10:18.563 "dma_device_type": 1 00:10:18.563 }, 00:10:18.563 { 00:10:18.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.563 "dma_device_type": 2 00:10:18.563 } 00:10:18.563 ], 00:10:18.563 "driver_specific": {} 00:10:18.563 } 00:10:18.563 ] 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.563 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.564 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.564 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.564 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.564 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.564 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.564 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.564 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.564 "name": "Existed_Raid", 00:10:18.564 "uuid": "e7323c82-7fb0-469f-a75f-a0fb0d7af29f", 00:10:18.564 "strip_size_kb": 64, 00:10:18.564 "state": "online", 00:10:18.564 "raid_level": "concat", 00:10:18.564 "superblock": true, 00:10:18.564 "num_base_bdevs": 3, 00:10:18.564 "num_base_bdevs_discovered": 3, 00:10:18.564 "num_base_bdevs_operational": 3, 00:10:18.564 "base_bdevs_list": [ 00:10:18.564 { 00:10:18.564 "name": "NewBaseBdev", 00:10:18.564 "uuid": 
"807654ac-db7c-4651-a500-2d8c56a9bd15", 00:10:18.564 "is_configured": true, 00:10:18.564 "data_offset": 2048, 00:10:18.564 "data_size": 63488 00:10:18.564 }, 00:10:18.564 { 00:10:18.564 "name": "BaseBdev2", 00:10:18.564 "uuid": "ed1ba279-0e50-40cc-8336-1d076cbd9c4c", 00:10:18.564 "is_configured": true, 00:10:18.564 "data_offset": 2048, 00:10:18.564 "data_size": 63488 00:10:18.564 }, 00:10:18.564 { 00:10:18.564 "name": "BaseBdev3", 00:10:18.564 "uuid": "ac4ef583-0d1d-4f91-a5dd-e3e5e5f975ec", 00:10:18.564 "is_configured": true, 00:10:18.564 "data_offset": 2048, 00:10:18.564 "data_size": 63488 00:10:18.564 } 00:10:18.564 ] 00:10:18.564 }' 00:10:18.564 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.564 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.130 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.130 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.130 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.130 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.130 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.130 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.130 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.130 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.130 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.130 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:19.130 [2024-11-15 11:22:01.920568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.130 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.130 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.130 "name": "Existed_Raid", 00:10:19.130 "aliases": [ 00:10:19.130 "e7323c82-7fb0-469f-a75f-a0fb0d7af29f" 00:10:19.130 ], 00:10:19.130 "product_name": "Raid Volume", 00:10:19.130 "block_size": 512, 00:10:19.130 "num_blocks": 190464, 00:10:19.130 "uuid": "e7323c82-7fb0-469f-a75f-a0fb0d7af29f", 00:10:19.130 "assigned_rate_limits": { 00:10:19.130 "rw_ios_per_sec": 0, 00:10:19.130 "rw_mbytes_per_sec": 0, 00:10:19.130 "r_mbytes_per_sec": 0, 00:10:19.130 "w_mbytes_per_sec": 0 00:10:19.130 }, 00:10:19.130 "claimed": false, 00:10:19.130 "zoned": false, 00:10:19.130 "supported_io_types": { 00:10:19.130 "read": true, 00:10:19.130 "write": true, 00:10:19.130 "unmap": true, 00:10:19.130 "flush": true, 00:10:19.130 "reset": true, 00:10:19.130 "nvme_admin": false, 00:10:19.130 "nvme_io": false, 00:10:19.130 "nvme_io_md": false, 00:10:19.130 "write_zeroes": true, 00:10:19.130 "zcopy": false, 00:10:19.130 "get_zone_info": false, 00:10:19.130 "zone_management": false, 00:10:19.130 "zone_append": false, 00:10:19.130 "compare": false, 00:10:19.130 "compare_and_write": false, 00:10:19.130 "abort": false, 00:10:19.131 "seek_hole": false, 00:10:19.131 "seek_data": false, 00:10:19.131 "copy": false, 00:10:19.131 "nvme_iov_md": false 00:10:19.131 }, 00:10:19.131 "memory_domains": [ 00:10:19.131 { 00:10:19.131 "dma_device_id": "system", 00:10:19.131 "dma_device_type": 1 00:10:19.131 }, 00:10:19.131 { 00:10:19.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.131 "dma_device_type": 2 00:10:19.131 }, 00:10:19.131 { 00:10:19.131 "dma_device_id": "system", 00:10:19.131 "dma_device_type": 1 00:10:19.131 }, 00:10:19.131 { 00:10:19.131 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.131 "dma_device_type": 2 00:10:19.131 }, 00:10:19.131 { 00:10:19.131 "dma_device_id": "system", 00:10:19.131 "dma_device_type": 1 00:10:19.131 }, 00:10:19.131 { 00:10:19.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.131 "dma_device_type": 2 00:10:19.131 } 00:10:19.131 ], 00:10:19.131 "driver_specific": { 00:10:19.131 "raid": { 00:10:19.131 "uuid": "e7323c82-7fb0-469f-a75f-a0fb0d7af29f", 00:10:19.131 "strip_size_kb": 64, 00:10:19.131 "state": "online", 00:10:19.131 "raid_level": "concat", 00:10:19.131 "superblock": true, 00:10:19.131 "num_base_bdevs": 3, 00:10:19.131 "num_base_bdevs_discovered": 3, 00:10:19.131 "num_base_bdevs_operational": 3, 00:10:19.131 "base_bdevs_list": [ 00:10:19.131 { 00:10:19.131 "name": "NewBaseBdev", 00:10:19.131 "uuid": "807654ac-db7c-4651-a500-2d8c56a9bd15", 00:10:19.131 "is_configured": true, 00:10:19.131 "data_offset": 2048, 00:10:19.131 "data_size": 63488 00:10:19.131 }, 00:10:19.131 { 00:10:19.131 "name": "BaseBdev2", 00:10:19.131 "uuid": "ed1ba279-0e50-40cc-8336-1d076cbd9c4c", 00:10:19.131 "is_configured": true, 00:10:19.131 "data_offset": 2048, 00:10:19.131 "data_size": 63488 00:10:19.131 }, 00:10:19.131 { 00:10:19.131 "name": "BaseBdev3", 00:10:19.131 "uuid": "ac4ef583-0d1d-4f91-a5dd-e3e5e5f975ec", 00:10:19.131 "is_configured": true, 00:10:19.131 "data_offset": 2048, 00:10:19.131 "data_size": 63488 00:10:19.131 } 00:10:19.131 ] 00:10:19.131 } 00:10:19.131 } 00:10:19.131 }' 00:10:19.131 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.131 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:19.131 BaseBdev2 00:10:19.131 BaseBdev3' 00:10:19.131 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:19.131 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.131 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.131 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:19.131 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.131 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.131 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.390 [2024-11-15 11:22:02.252277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.390 [2024-11-15 11:22:02.252315] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.390 [2024-11-15 11:22:02.252430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.390 [2024-11-15 11:22:02.252510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.390 [2024-11-15 11:22:02.252531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66130 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66130 ']' 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66130 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66130 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:19.390 killing process with pid 66130 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66130' 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66130 00:10:19.390 [2024-11-15 11:22:02.289904] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.390 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66130 00:10:19.649 [2024-11-15 11:22:02.552413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.023 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:21.023 00:10:21.023 real 0m11.885s 00:10:21.023 user 0m19.614s 00:10:21.023 sys 0m1.713s 00:10:21.023 11:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:10:21.023 ************************************ 00:10:21.023 END TEST raid_state_function_test_sb 00:10:21.023 ************************************ 00:10:21.023 11:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.023 11:22:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:21.023 11:22:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:21.023 11:22:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:21.023 11:22:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.023 ************************************ 00:10:21.023 START TEST raid_superblock_test 00:10:21.023 ************************************ 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:21.023 11:22:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66761 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66761 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 66761 ']' 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:21.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.023 11:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.024 11:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:21.024 11:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.024 [2024-11-15 11:22:03.803852] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:10:21.024 [2024-11-15 11:22:03.804029] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66761 ] 00:10:21.281 [2024-11-15 11:22:03.976599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.281 [2024-11-15 11:22:04.125629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.538 [2024-11-15 11:22:04.340345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.538 [2024-11-15 11:22:04.340401] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:22.104 
11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.104 malloc1 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.104 [2024-11-15 11:22:04.884409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:22.104 [2024-11-15 11:22:04.884499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.104 [2024-11-15 11:22:04.884548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:22.104 [2024-11-15 11:22:04.884574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.104 [2024-11-15 11:22:04.887505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.104 [2024-11-15 11:22:04.887564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:22.104 pt1 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.104 malloc2 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.104 [2024-11-15 11:22:04.937372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.104 [2024-11-15 11:22:04.937458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.104 [2024-11-15 11:22:04.937496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:22.104 [2024-11-15 11:22:04.937511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.104 [2024-11-15 11:22:04.940363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.104 [2024-11-15 11:22:04.940408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.104 
pt2 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.104 malloc3 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.104 11:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.104 [2024-11-15 11:22:05.005985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.104 [2024-11-15 11:22:05.006051] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.104 [2024-11-15 11:22:05.006086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:22.104 [2024-11-15 11:22:05.006103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.104 [2024-11-15 11:22:05.009141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.104 [2024-11-15 11:22:05.009236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.104 pt3 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.104 [2024-11-15 11:22:05.014100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:22.104 [2024-11-15 11:22:05.016800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.104 [2024-11-15 11:22:05.016903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.104 [2024-11-15 11:22:05.017131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:22.104 [2024-11-15 11:22:05.017165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:22.104 [2024-11-15 11:22:05.017519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:22.104 [2024-11-15 11:22:05.017750] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:22.104 [2024-11-15 11:22:05.017774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:22.104 [2024-11-15 11:22:05.018078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.104 11:22:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.104 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.362 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.362 "name": "raid_bdev1", 00:10:22.362 "uuid": "e981fdac-47b8-4574-a3a6-def503a7aa35", 00:10:22.362 "strip_size_kb": 64, 00:10:22.362 "state": "online", 00:10:22.362 "raid_level": "concat", 00:10:22.362 "superblock": true, 00:10:22.362 "num_base_bdevs": 3, 00:10:22.362 "num_base_bdevs_discovered": 3, 00:10:22.362 "num_base_bdevs_operational": 3, 00:10:22.362 "base_bdevs_list": [ 00:10:22.362 { 00:10:22.362 "name": "pt1", 00:10:22.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.362 "is_configured": true, 00:10:22.362 "data_offset": 2048, 00:10:22.362 "data_size": 63488 00:10:22.362 }, 00:10:22.362 { 00:10:22.362 "name": "pt2", 00:10:22.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.362 "is_configured": true, 00:10:22.362 "data_offset": 2048, 00:10:22.362 "data_size": 63488 00:10:22.362 }, 00:10:22.362 { 00:10:22.362 "name": "pt3", 00:10:22.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.362 "is_configured": true, 00:10:22.362 "data_offset": 2048, 00:10:22.362 "data_size": 63488 00:10:22.362 } 00:10:22.362 ] 00:10:22.362 }' 00:10:22.362 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.362 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.620 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:22.620 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:22.620 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.620 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:22.620 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.620 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.620 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.620 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.620 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.620 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.620 [2024-11-15 11:22:05.542808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.620 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.878 "name": "raid_bdev1", 00:10:22.878 "aliases": [ 00:10:22.878 "e981fdac-47b8-4574-a3a6-def503a7aa35" 00:10:22.878 ], 00:10:22.878 "product_name": "Raid Volume", 00:10:22.878 "block_size": 512, 00:10:22.878 "num_blocks": 190464, 00:10:22.878 "uuid": "e981fdac-47b8-4574-a3a6-def503a7aa35", 00:10:22.878 "assigned_rate_limits": { 00:10:22.878 "rw_ios_per_sec": 0, 00:10:22.878 "rw_mbytes_per_sec": 0, 00:10:22.878 "r_mbytes_per_sec": 0, 00:10:22.878 "w_mbytes_per_sec": 0 00:10:22.878 }, 00:10:22.878 "claimed": false, 00:10:22.878 "zoned": false, 00:10:22.878 "supported_io_types": { 00:10:22.878 "read": true, 00:10:22.878 "write": true, 00:10:22.878 "unmap": true, 00:10:22.878 "flush": true, 00:10:22.878 "reset": true, 00:10:22.878 "nvme_admin": false, 00:10:22.878 "nvme_io": false, 00:10:22.878 "nvme_io_md": false, 00:10:22.878 "write_zeroes": true, 00:10:22.878 "zcopy": false, 00:10:22.878 "get_zone_info": false, 00:10:22.878 "zone_management": false, 00:10:22.878 "zone_append": false, 00:10:22.878 "compare": 
false, 00:10:22.878 "compare_and_write": false, 00:10:22.878 "abort": false, 00:10:22.878 "seek_hole": false, 00:10:22.878 "seek_data": false, 00:10:22.878 "copy": false, 00:10:22.878 "nvme_iov_md": false 00:10:22.878 }, 00:10:22.878 "memory_domains": [ 00:10:22.878 { 00:10:22.878 "dma_device_id": "system", 00:10:22.878 "dma_device_type": 1 00:10:22.878 }, 00:10:22.878 { 00:10:22.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.878 "dma_device_type": 2 00:10:22.878 }, 00:10:22.878 { 00:10:22.878 "dma_device_id": "system", 00:10:22.878 "dma_device_type": 1 00:10:22.878 }, 00:10:22.878 { 00:10:22.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.878 "dma_device_type": 2 00:10:22.878 }, 00:10:22.878 { 00:10:22.878 "dma_device_id": "system", 00:10:22.878 "dma_device_type": 1 00:10:22.878 }, 00:10:22.878 { 00:10:22.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.878 "dma_device_type": 2 00:10:22.878 } 00:10:22.878 ], 00:10:22.878 "driver_specific": { 00:10:22.878 "raid": { 00:10:22.878 "uuid": "e981fdac-47b8-4574-a3a6-def503a7aa35", 00:10:22.878 "strip_size_kb": 64, 00:10:22.878 "state": "online", 00:10:22.878 "raid_level": "concat", 00:10:22.878 "superblock": true, 00:10:22.878 "num_base_bdevs": 3, 00:10:22.878 "num_base_bdevs_discovered": 3, 00:10:22.878 "num_base_bdevs_operational": 3, 00:10:22.878 "base_bdevs_list": [ 00:10:22.878 { 00:10:22.878 "name": "pt1", 00:10:22.878 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.878 "is_configured": true, 00:10:22.878 "data_offset": 2048, 00:10:22.878 "data_size": 63488 00:10:22.878 }, 00:10:22.878 { 00:10:22.878 "name": "pt2", 00:10:22.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.878 "is_configured": true, 00:10:22.878 "data_offset": 2048, 00:10:22.878 "data_size": 63488 00:10:22.878 }, 00:10:22.878 { 00:10:22.878 "name": "pt3", 00:10:22.878 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.878 "is_configured": true, 00:10:22.878 "data_offset": 2048, 00:10:22.878 
"data_size": 63488 00:10:22.878 } 00:10:22.878 ] 00:10:22.878 } 00:10:22.878 } 00:10:22.878 }' 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:22.878 pt2 00:10:22.878 pt3' 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.878 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:23.137 [2024-11-15 11:22:05.890790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e981fdac-47b8-4574-a3a6-def503a7aa35 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e981fdac-47b8-4574-a3a6-def503a7aa35 ']' 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.137 [2024-11-15 11:22:05.946453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.137 [2024-11-15 11:22:05.946533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.137 [2024-11-15 11:22:05.946639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.137 [2024-11-15 11:22:05.946722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.137 [2024-11-15 11:22:05.946737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.137 11:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.137 11:22:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:23.137 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.395 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:23.395 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:23.395 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:23.395 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:23.395 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:23.395 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.395 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:23.395 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.395 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:23.395 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.395 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.395 [2024-11-15 11:22:06.098580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:23.395 [2024-11-15 11:22:06.101295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:10:23.395 [2024-11-15 11:22:06.101373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:23.395 [2024-11-15 11:22:06.101480] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:23.395 [2024-11-15 11:22:06.101624] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:23.395 [2024-11-15 11:22:06.101663] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:23.395 [2024-11-15 11:22:06.101696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.395 [2024-11-15 11:22:06.101714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:23.395 request: 00:10:23.395 { 00:10:23.396 "name": "raid_bdev1", 00:10:23.396 "raid_level": "concat", 00:10:23.396 "base_bdevs": [ 00:10:23.396 "malloc1", 00:10:23.396 "malloc2", 00:10:23.396 "malloc3" 00:10:23.396 ], 00:10:23.396 "strip_size_kb": 64, 00:10:23.396 "superblock": false, 00:10:23.396 "method": "bdev_raid_create", 00:10:23.396 "req_id": 1 00:10:23.396 } 00:10:23.396 Got JSON-RPC error response 00:10:23.396 response: 00:10:23.396 { 00:10:23.396 "code": -17, 00:10:23.396 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:23.396 } 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.396 [2024-11-15 11:22:06.166713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:23.396 [2024-11-15 11:22:06.166819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.396 [2024-11-15 11:22:06.166851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:23.396 [2024-11-15 11:22:06.166866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.396 [2024-11-15 11:22:06.170233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.396 [2024-11-15 11:22:06.170304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:23.396 [2024-11-15 11:22:06.170453] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:23.396 [2024-11-15 11:22:06.170567] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:23.396 pt1 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.396 "name": "raid_bdev1", 
00:10:23.396 "uuid": "e981fdac-47b8-4574-a3a6-def503a7aa35", 00:10:23.396 "strip_size_kb": 64, 00:10:23.396 "state": "configuring", 00:10:23.396 "raid_level": "concat", 00:10:23.396 "superblock": true, 00:10:23.396 "num_base_bdevs": 3, 00:10:23.396 "num_base_bdevs_discovered": 1, 00:10:23.396 "num_base_bdevs_operational": 3, 00:10:23.396 "base_bdevs_list": [ 00:10:23.396 { 00:10:23.396 "name": "pt1", 00:10:23.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.396 "is_configured": true, 00:10:23.396 "data_offset": 2048, 00:10:23.396 "data_size": 63488 00:10:23.396 }, 00:10:23.396 { 00:10:23.396 "name": null, 00:10:23.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.396 "is_configured": false, 00:10:23.396 "data_offset": 2048, 00:10:23.396 "data_size": 63488 00:10:23.396 }, 00:10:23.396 { 00:10:23.396 "name": null, 00:10:23.396 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.396 "is_configured": false, 00:10:23.396 "data_offset": 2048, 00:10:23.396 "data_size": 63488 00:10:23.396 } 00:10:23.396 ] 00:10:23.396 }' 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.396 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.963 [2024-11-15 11:22:06.726966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:23.963 [2024-11-15 11:22:06.727073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.963 [2024-11-15 11:22:06.727111] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:23.963 [2024-11-15 11:22:06.727127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.963 [2024-11-15 11:22:06.727817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.963 [2024-11-15 11:22:06.727858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:23.963 [2024-11-15 11:22:06.727994] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:23.963 [2024-11-15 11:22:06.728034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:23.963 pt2 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.963 [2024-11-15 11:22:06.734935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.963 "name": "raid_bdev1", 00:10:23.963 "uuid": "e981fdac-47b8-4574-a3a6-def503a7aa35", 00:10:23.963 "strip_size_kb": 64, 00:10:23.963 "state": "configuring", 00:10:23.963 "raid_level": "concat", 00:10:23.963 "superblock": true, 00:10:23.963 "num_base_bdevs": 3, 00:10:23.963 "num_base_bdevs_discovered": 1, 00:10:23.963 "num_base_bdevs_operational": 3, 00:10:23.963 "base_bdevs_list": [ 00:10:23.963 { 00:10:23.963 "name": "pt1", 00:10:23.963 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.963 "is_configured": true, 00:10:23.963 "data_offset": 2048, 00:10:23.963 "data_size": 63488 00:10:23.963 }, 00:10:23.963 { 00:10:23.963 "name": null, 00:10:23.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.963 "is_configured": false, 00:10:23.963 "data_offset": 0, 00:10:23.963 "data_size": 63488 00:10:23.963 }, 00:10:23.963 { 00:10:23.963 "name": null, 00:10:23.963 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.963 "is_configured": false, 00:10:23.963 "data_offset": 2048, 00:10:23.963 "data_size": 63488 00:10:23.963 } 00:10:23.963 ] 00:10:23.963 }' 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.963 11:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.531 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:24.531 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.531 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.531 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.531 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.531 [2024-11-15 11:22:07.271091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.531 [2024-11-15 11:22:07.271242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.531 [2024-11-15 11:22:07.271273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:24.531 [2024-11-15 11:22:07.271292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.531 [2024-11-15 11:22:07.271952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.531 [2024-11-15 11:22:07.271992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.531 [2024-11-15 11:22:07.272102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:24.531 [2024-11-15 11:22:07.272141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.531 pt2 00:10:24.531 11:22:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.531 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:24.531 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.531 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:24.531 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.531 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.531 [2024-11-15 11:22:07.279004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:24.531 [2024-11-15 11:22:07.279080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.531 [2024-11-15 11:22:07.279100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:24.531 [2024-11-15 11:22:07.279115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.532 [2024-11-15 11:22:07.279576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.532 [2024-11-15 11:22:07.279622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:24.532 [2024-11-15 11:22:07.279725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:24.532 [2024-11-15 11:22:07.279757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:24.532 [2024-11-15 11:22:07.279902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:24.532 [2024-11-15 11:22:07.279931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:24.532 [2024-11-15 11:22:07.280289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:10:24.532 [2024-11-15 11:22:07.280550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:24.532 [2024-11-15 11:22:07.280572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:24.532 [2024-11-15 11:22:07.280743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.532 pt3 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.532 11:22:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.532 "name": "raid_bdev1", 00:10:24.532 "uuid": "e981fdac-47b8-4574-a3a6-def503a7aa35", 00:10:24.532 "strip_size_kb": 64, 00:10:24.532 "state": "online", 00:10:24.532 "raid_level": "concat", 00:10:24.532 "superblock": true, 00:10:24.532 "num_base_bdevs": 3, 00:10:24.532 "num_base_bdevs_discovered": 3, 00:10:24.532 "num_base_bdevs_operational": 3, 00:10:24.532 "base_bdevs_list": [ 00:10:24.532 { 00:10:24.532 "name": "pt1", 00:10:24.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.532 "is_configured": true, 00:10:24.532 "data_offset": 2048, 00:10:24.532 "data_size": 63488 00:10:24.532 }, 00:10:24.532 { 00:10:24.532 "name": "pt2", 00:10:24.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.532 "is_configured": true, 00:10:24.532 "data_offset": 2048, 00:10:24.532 "data_size": 63488 00:10:24.532 }, 00:10:24.532 { 00:10:24.532 "name": "pt3", 00:10:24.532 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.532 "is_configured": true, 00:10:24.532 "data_offset": 2048, 00:10:24.532 "data_size": 63488 00:10:24.532 } 00:10:24.532 ] 00:10:24.532 }' 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.532 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.099 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:25.099 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:10:25.099 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.099 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.099 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.099 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.099 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.099 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.099 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.099 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.099 [2024-11-15 11:22:07.807714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.099 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.099 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.099 "name": "raid_bdev1", 00:10:25.099 "aliases": [ 00:10:25.099 "e981fdac-47b8-4574-a3a6-def503a7aa35" 00:10:25.099 ], 00:10:25.099 "product_name": "Raid Volume", 00:10:25.099 "block_size": 512, 00:10:25.099 "num_blocks": 190464, 00:10:25.099 "uuid": "e981fdac-47b8-4574-a3a6-def503a7aa35", 00:10:25.099 "assigned_rate_limits": { 00:10:25.099 "rw_ios_per_sec": 0, 00:10:25.099 "rw_mbytes_per_sec": 0, 00:10:25.099 "r_mbytes_per_sec": 0, 00:10:25.099 "w_mbytes_per_sec": 0 00:10:25.099 }, 00:10:25.099 "claimed": false, 00:10:25.099 "zoned": false, 00:10:25.099 "supported_io_types": { 00:10:25.099 "read": true, 00:10:25.099 "write": true, 00:10:25.099 "unmap": true, 00:10:25.100 "flush": true, 00:10:25.100 "reset": true, 00:10:25.100 "nvme_admin": false, 00:10:25.100 "nvme_io": false, 00:10:25.100 
"nvme_io_md": false, 00:10:25.100 "write_zeroes": true, 00:10:25.100 "zcopy": false, 00:10:25.100 "get_zone_info": false, 00:10:25.100 "zone_management": false, 00:10:25.100 "zone_append": false, 00:10:25.100 "compare": false, 00:10:25.100 "compare_and_write": false, 00:10:25.100 "abort": false, 00:10:25.100 "seek_hole": false, 00:10:25.100 "seek_data": false, 00:10:25.100 "copy": false, 00:10:25.100 "nvme_iov_md": false 00:10:25.100 }, 00:10:25.100 "memory_domains": [ 00:10:25.100 { 00:10:25.100 "dma_device_id": "system", 00:10:25.100 "dma_device_type": 1 00:10:25.100 }, 00:10:25.100 { 00:10:25.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.100 "dma_device_type": 2 00:10:25.100 }, 00:10:25.100 { 00:10:25.100 "dma_device_id": "system", 00:10:25.100 "dma_device_type": 1 00:10:25.100 }, 00:10:25.100 { 00:10:25.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.100 "dma_device_type": 2 00:10:25.100 }, 00:10:25.100 { 00:10:25.100 "dma_device_id": "system", 00:10:25.100 "dma_device_type": 1 00:10:25.100 }, 00:10:25.100 { 00:10:25.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.100 "dma_device_type": 2 00:10:25.100 } 00:10:25.100 ], 00:10:25.100 "driver_specific": { 00:10:25.100 "raid": { 00:10:25.100 "uuid": "e981fdac-47b8-4574-a3a6-def503a7aa35", 00:10:25.100 "strip_size_kb": 64, 00:10:25.100 "state": "online", 00:10:25.100 "raid_level": "concat", 00:10:25.100 "superblock": true, 00:10:25.100 "num_base_bdevs": 3, 00:10:25.100 "num_base_bdevs_discovered": 3, 00:10:25.100 "num_base_bdevs_operational": 3, 00:10:25.100 "base_bdevs_list": [ 00:10:25.100 { 00:10:25.100 "name": "pt1", 00:10:25.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.100 "is_configured": true, 00:10:25.100 "data_offset": 2048, 00:10:25.100 "data_size": 63488 00:10:25.100 }, 00:10:25.100 { 00:10:25.100 "name": "pt2", 00:10:25.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.100 "is_configured": true, 00:10:25.100 "data_offset": 2048, 00:10:25.100 "data_size": 
63488 00:10:25.100 }, 00:10:25.100 { 00:10:25.100 "name": "pt3", 00:10:25.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.100 "is_configured": true, 00:10:25.100 "data_offset": 2048, 00:10:25.100 "data_size": 63488 00:10:25.100 } 00:10:25.100 ] 00:10:25.100 } 00:10:25.100 } 00:10:25.100 }' 00:10:25.100 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.100 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:25.100 pt2 00:10:25.100 pt3' 00:10:25.100 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.100 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.100 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.100 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:25.100 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.100 11:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.100 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.100 11:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.100 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.100 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.100 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.100 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:10:25.100 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.100 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.100 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.100 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] 
| .uuid' 00:10:25.357 [2024-11-15 11:22:08.135732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e981fdac-47b8-4574-a3a6-def503a7aa35 '!=' e981fdac-47b8-4574-a3a6-def503a7aa35 ']' 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66761 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 66761 ']' 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 66761 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66761 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:25.357 killing process with pid 66761 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66761' 00:10:25.357 11:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 66761 00:10:25.357 [2024-11-15 11:22:08.217343] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.357 11:22:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 66761 00:10:25.357 [2024-11-15 11:22:08.217460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.357 [2024-11-15 11:22:08.217575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.357 [2024-11-15 11:22:08.217598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:25.614 [2024-11-15 11:22:08.485213] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.984 11:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:26.984 00:10:26.984 real 0m5.832s 00:10:26.984 user 0m8.791s 00:10:26.984 sys 0m0.907s 00:10:26.984 11:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:26.984 11:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.984 ************************************ 00:10:26.984 END TEST raid_superblock_test 00:10:26.984 ************************************ 00:10:26.984 11:22:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:26.984 11:22:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:26.984 11:22:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:26.984 11:22:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:26.984 ************************************ 00:10:26.984 START TEST raid_read_error_test 00:10:26.984 ************************************ 00:10:26.984 11:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:10:26.984 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:26.984 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:10:26.984 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:26.984 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:26.985 11:22:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8aLBhYz97C 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67020 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67020 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67020 ']' 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:26.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:26.985 11:22:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.985 [2024-11-15 11:22:09.717378] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:10:26.985 [2024-11-15 11:22:09.717567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67020 ] 00:10:26.985 [2024-11-15 11:22:09.904764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.242 [2024-11-15 11:22:10.047096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.501 [2024-11-15 11:22:10.278219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.501 [2024-11-15 11:22:10.278345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.758 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:27.758 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:27.758 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.758 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:27.758 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.758 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.017 BaseBdev1_malloc 00:10:28.017 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.017 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:28.017 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.017 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.017 true 00:10:28.017 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:28.017 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.018 [2024-11-15 11:22:10.758830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:28.018 [2024-11-15 11:22:10.758927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.018 [2024-11-15 11:22:10.758960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:28.018 [2024-11-15 11:22:10.758977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.018 [2024-11-15 11:22:10.761884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.018 [2024-11-15 11:22:10.761970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:28.018 BaseBdev1 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.018 BaseBdev2_malloc 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.018 true 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.018 [2024-11-15 11:22:10.819698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:28.018 [2024-11-15 11:22:10.819806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.018 [2024-11-15 11:22:10.819836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:28.018 [2024-11-15 11:22:10.819869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.018 [2024-11-15 11:22:10.822869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.018 [2024-11-15 11:22:10.822947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:28.018 BaseBdev2 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.018 BaseBdev3_malloc 00:10:28.018 11:22:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.018 true 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.018 [2024-11-15 11:22:10.892447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:28.018 [2024-11-15 11:22:10.892537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.018 [2024-11-15 11:22:10.892570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:28.018 [2024-11-15 11:22:10.892588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.018 [2024-11-15 11:22:10.895563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.018 [2024-11-15 11:22:10.895644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:28.018 BaseBdev3 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.018 [2024-11-15 11:22:10.900664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.018 [2024-11-15 11:22:10.903311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.018 [2024-11-15 11:22:10.903421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.018 [2024-11-15 11:22:10.903776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:28.018 [2024-11-15 11:22:10.903834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:28.018 [2024-11-15 11:22:10.904225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:28.018 [2024-11-15 11:22:10.904491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:28.018 [2024-11-15 11:22:10.904523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:28.018 [2024-11-15 11:22:10.904795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.018 11:22:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.018 "name": "raid_bdev1", 00:10:28.018 "uuid": "bae14e9b-3315-41bc-95db-6f1b17a1d0ac", 00:10:28.018 "strip_size_kb": 64, 00:10:28.018 "state": "online", 00:10:28.018 "raid_level": "concat", 00:10:28.018 "superblock": true, 00:10:28.018 "num_base_bdevs": 3, 00:10:28.018 "num_base_bdevs_discovered": 3, 00:10:28.018 "num_base_bdevs_operational": 3, 00:10:28.018 "base_bdevs_list": [ 00:10:28.018 { 00:10:28.018 "name": "BaseBdev1", 00:10:28.018 "uuid": "a0d4b7fe-2b07-57a6-8cf9-f456e13b73c7", 00:10:28.018 "is_configured": true, 00:10:28.018 "data_offset": 2048, 00:10:28.018 "data_size": 63488 00:10:28.018 }, 00:10:28.018 { 00:10:28.018 "name": "BaseBdev2", 00:10:28.018 "uuid": "e6b0c35e-b612-5b88-ad67-cbb8b461c62b", 00:10:28.018 "is_configured": true, 00:10:28.018 "data_offset": 2048, 00:10:28.018 "data_size": 63488 
00:10:28.018 }, 00:10:28.018 { 00:10:28.018 "name": "BaseBdev3", 00:10:28.018 "uuid": "2756369f-eae4-507d-9f7a-265c053646ae", 00:10:28.018 "is_configured": true, 00:10:28.018 "data_offset": 2048, 00:10:28.018 "data_size": 63488 00:10:28.018 } 00:10:28.018 ] 00:10:28.018 }' 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.018 11:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.584 11:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:28.584 11:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:28.844 [2024-11-15 11:22:11.538360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.783 "name": "raid_bdev1", 00:10:29.783 "uuid": "bae14e9b-3315-41bc-95db-6f1b17a1d0ac", 00:10:29.783 "strip_size_kb": 64, 00:10:29.783 "state": "online", 00:10:29.783 "raid_level": "concat", 00:10:29.783 "superblock": true, 00:10:29.783 "num_base_bdevs": 3, 00:10:29.783 "num_base_bdevs_discovered": 3, 00:10:29.783 "num_base_bdevs_operational": 3, 00:10:29.783 "base_bdevs_list": [ 00:10:29.783 { 00:10:29.783 "name": "BaseBdev1", 00:10:29.783 "uuid": "a0d4b7fe-2b07-57a6-8cf9-f456e13b73c7", 00:10:29.783 "is_configured": true, 00:10:29.783 "data_offset": 2048, 00:10:29.783 "data_size": 63488 
00:10:29.783 }, 00:10:29.783 { 00:10:29.783 "name": "BaseBdev2", 00:10:29.783 "uuid": "e6b0c35e-b612-5b88-ad67-cbb8b461c62b", 00:10:29.783 "is_configured": true, 00:10:29.783 "data_offset": 2048, 00:10:29.783 "data_size": 63488 00:10:29.783 }, 00:10:29.783 { 00:10:29.783 "name": "BaseBdev3", 00:10:29.783 "uuid": "2756369f-eae4-507d-9f7a-265c053646ae", 00:10:29.783 "is_configured": true, 00:10:29.783 "data_offset": 2048, 00:10:29.783 "data_size": 63488 00:10:29.783 } 00:10:29.783 ] 00:10:29.783 }' 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.783 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.042 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.042 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.042 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.042 [2024-11-15 11:22:12.976931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.042 [2024-11-15 11:22:12.976971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.042 [2024-11-15 11:22:12.980573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.043 [2024-11-15 11:22:12.980664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.043 [2024-11-15 11:22:12.980733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.043 [2024-11-15 11:22:12.980747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:30.043 { 00:10:30.043 "results": [ 00:10:30.043 { 00:10:30.043 "job": "raid_bdev1", 00:10:30.043 "core_mask": "0x1", 00:10:30.043 "workload": "randrw", 00:10:30.043 "percentage": 50, 
00:10:30.043 "status": "finished", 00:10:30.043 "queue_depth": 1, 00:10:30.043 "io_size": 131072, 00:10:30.043 "runtime": 1.436485, 00:10:30.043 "iops": 9781.51529601771, 00:10:30.043 "mibps": 1222.6894120022137, 00:10:30.043 "io_failed": 1, 00:10:30.043 "io_timeout": 0, 00:10:30.043 "avg_latency_us": 143.568414460575, 00:10:30.043 "min_latency_us": 35.60727272727273, 00:10:30.043 "max_latency_us": 1966.08 00:10:30.043 } 00:10:30.043 ], 00:10:30.043 "core_count": 1 00:10:30.043 } 00:10:30.043 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.043 11:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67020 00:10:30.043 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67020 ']' 00:10:30.043 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67020 00:10:30.043 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:30.043 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:30.043 11:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67020 00:10:30.302 11:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:30.302 11:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:30.302 killing process with pid 67020 00:10:30.302 11:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67020' 00:10:30.302 11:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67020 00:10:30.302 [2024-11-15 11:22:13.019922] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.302 11:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67020 00:10:30.302 [2024-11-15 11:22:13.234889] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:31.682 11:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:31.682 11:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8aLBhYz97C 00:10:31.682 11:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:31.682 11:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:31.682 11:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:31.682 11:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:31.682 11:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:31.682 11:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:31.682 00:10:31.682 real 0m4.857s 00:10:31.682 user 0m5.911s 00:10:31.682 sys 0m0.698s 00:10:31.682 11:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:31.682 11:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.682 ************************************ 00:10:31.682 END TEST raid_read_error_test 00:10:31.682 ************************************ 00:10:31.682 11:22:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:31.683 11:22:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:31.683 11:22:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:31.683 11:22:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.683 ************************************ 00:10:31.683 START TEST raid_write_error_test 00:10:31.683 ************************************ 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:10:31.683 11:22:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:31.683 11:22:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mjkmECvPGO 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67171 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67171 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67171 ']' 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:31.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:31.683 11:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.955 [2024-11-15 11:22:14.635998] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:10:31.955 [2024-11-15 11:22:14.636864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67171 ] 00:10:31.955 [2024-11-15 11:22:14.828843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.213 [2024-11-15 11:22:14.985313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.472 [2024-11-15 11:22:15.211778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.472 [2024-11-15 11:22:15.211869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.731 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:32.731 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:32.731 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.731 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:32.731 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.731 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.731 BaseBdev1_malloc 00:10:32.731 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.731 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:32.731 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.731 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 true 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 [2024-11-15 11:22:15.689894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:32.991 [2024-11-15 11:22:15.690019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.991 [2024-11-15 11:22:15.690051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:32.991 [2024-11-15 11:22:15.690069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.991 [2024-11-15 11:22:15.693116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.991 [2024-11-15 11:22:15.693206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:32.991 BaseBdev1 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.991 BaseBdev2_malloc 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 true 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 [2024-11-15 11:22:15.757923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:32.991 [2024-11-15 11:22:15.758065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.991 [2024-11-15 11:22:15.758094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:32.991 [2024-11-15 11:22:15.758112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.991 [2024-11-15 11:22:15.761460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.991 [2024-11-15 11:22:15.761588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:32.991 BaseBdev2 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.991 11:22:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 BaseBdev3_malloc 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 true 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 [2024-11-15 11:22:15.834292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:32.991 [2024-11-15 11:22:15.834378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.991 [2024-11-15 11:22:15.834414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:32.991 [2024-11-15 11:22:15.834431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.991 [2024-11-15 11:22:15.837659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.991 [2024-11-15 11:22:15.837723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:32.991 BaseBdev3 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 [2024-11-15 11:22:15.842454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.991 [2024-11-15 11:22:15.845343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.991 [2024-11-15 11:22:15.845459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.991 [2024-11-15 11:22:15.845725] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:32.991 [2024-11-15 11:22:15.845742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:32.991 [2024-11-15 11:22:15.846050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:32.991 [2024-11-15 11:22:15.846347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:32.991 [2024-11-15 11:22:15.846372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:32.991 [2024-11-15 11:22:15.846669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.991 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.992 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.992 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.992 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.992 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.992 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.992 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.992 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.992 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.992 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.992 "name": "raid_bdev1", 00:10:32.992 "uuid": "f2947ead-e661-4e53-abff-5bffc9437679", 00:10:32.992 "strip_size_kb": 64, 00:10:32.992 "state": "online", 00:10:32.992 "raid_level": "concat", 00:10:32.992 "superblock": true, 00:10:32.992 "num_base_bdevs": 3, 00:10:32.992 "num_base_bdevs_discovered": 3, 00:10:32.992 "num_base_bdevs_operational": 3, 00:10:32.992 "base_bdevs_list": [ 00:10:32.992 { 00:10:32.992 
"name": "BaseBdev1", 00:10:32.992 "uuid": "51ee714c-e988-5340-b3a5-1b93f0901b08", 00:10:32.992 "is_configured": true, 00:10:32.992 "data_offset": 2048, 00:10:32.992 "data_size": 63488 00:10:32.992 }, 00:10:32.992 { 00:10:32.992 "name": "BaseBdev2", 00:10:32.992 "uuid": "a88773c7-c79f-5c7a-811c-8e49205033df", 00:10:32.992 "is_configured": true, 00:10:32.992 "data_offset": 2048, 00:10:32.992 "data_size": 63488 00:10:32.992 }, 00:10:32.992 { 00:10:32.992 "name": "BaseBdev3", 00:10:32.992 "uuid": "d474541a-f3e9-5a2c-8db2-0e7e2f2fe28f", 00:10:32.992 "is_configured": true, 00:10:32.992 "data_offset": 2048, 00:10:32.992 "data_size": 63488 00:10:32.992 } 00:10:32.992 ] 00:10:32.992 }' 00:10:32.992 11:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.992 11:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.559 11:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:33.559 11:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:33.559 [2024-11-15 11:22:16.496729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.569 "name": "raid_bdev1", 00:10:34.569 "uuid": "f2947ead-e661-4e53-abff-5bffc9437679", 00:10:34.569 "strip_size_kb": 64, 00:10:34.569 "state": "online", 
00:10:34.569 "raid_level": "concat", 00:10:34.569 "superblock": true, 00:10:34.569 "num_base_bdevs": 3, 00:10:34.569 "num_base_bdevs_discovered": 3, 00:10:34.569 "num_base_bdevs_operational": 3, 00:10:34.569 "base_bdevs_list": [ 00:10:34.569 { 00:10:34.569 "name": "BaseBdev1", 00:10:34.569 "uuid": "51ee714c-e988-5340-b3a5-1b93f0901b08", 00:10:34.569 "is_configured": true, 00:10:34.569 "data_offset": 2048, 00:10:34.569 "data_size": 63488 00:10:34.569 }, 00:10:34.569 { 00:10:34.569 "name": "BaseBdev2", 00:10:34.569 "uuid": "a88773c7-c79f-5c7a-811c-8e49205033df", 00:10:34.569 "is_configured": true, 00:10:34.569 "data_offset": 2048, 00:10:34.569 "data_size": 63488 00:10:34.569 }, 00:10:34.569 { 00:10:34.569 "name": "BaseBdev3", 00:10:34.569 "uuid": "d474541a-f3e9-5a2c-8db2-0e7e2f2fe28f", 00:10:34.569 "is_configured": true, 00:10:34.569 "data_offset": 2048, 00:10:34.569 "data_size": 63488 00:10:34.569 } 00:10:34.569 ] 00:10:34.569 }' 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.569 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.138 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:35.138 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.138 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.138 [2024-11-15 11:22:17.927939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:35.138 [2024-11-15 11:22:17.928150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.138 [2024-11-15 11:22:17.931888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.138 [2024-11-15 11:22:17.932142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.138 [2024-11-15 11:22:17.932274] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.138 { 00:10:35.138 "results": [ 00:10:35.138 { 00:10:35.138 "job": "raid_bdev1", 00:10:35.138 "core_mask": "0x1", 00:10:35.138 "workload": "randrw", 00:10:35.138 "percentage": 50, 00:10:35.138 "status": "finished", 00:10:35.138 "queue_depth": 1, 00:10:35.138 "io_size": 131072, 00:10:35.138 "runtime": 1.42858, 00:10:35.138 "iops": 9485.643086141483, 00:10:35.138 "mibps": 1185.7053857676854, 00:10:35.138 "io_failed": 1, 00:10:35.139 "io_timeout": 0, 00:10:35.139 "avg_latency_us": 147.4863668562842, 00:10:35.139 "min_latency_us": 37.236363636363635, 00:10:35.139 "max_latency_us": 1921.3963636363637 00:10:35.139 } 00:10:35.139 ], 00:10:35.139 "core_count": 1 00:10:35.139 } 00:10:35.139 [2024-11-15 11:22:17.932426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:35.139 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.139 11:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67171 00:10:35.139 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67171 ']' 00:10:35.139 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67171 00:10:35.139 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:35.139 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:35.139 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67171 00:10:35.139 killing process with pid 67171 00:10:35.139 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:35.139 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:35.139 11:22:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67171' 00:10:35.139 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67171 00:10:35.139 [2024-11-15 11:22:17.970843] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.139 11:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67171 00:10:35.398 [2024-11-15 11:22:18.195555] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.776 11:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mjkmECvPGO 00:10:36.776 11:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:36.776 11:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:36.776 11:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:36.776 11:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:36.776 11:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.776 11:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:36.776 11:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:36.776 ************************************ 00:10:36.776 END TEST raid_write_error_test 00:10:36.776 ************************************ 00:10:36.776 00:10:36.776 real 0m4.953s 00:10:36.776 user 0m6.050s 00:10:36.776 sys 0m0.668s 00:10:36.776 11:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:36.776 11:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.776 11:22:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:36.776 11:22:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:36.776 11:22:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:36.776 11:22:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:36.776 11:22:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.776 ************************************ 00:10:36.776 START TEST raid_state_function_test 00:10:36.776 ************************************ 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:36.776 Process raid pid: 67315 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67315 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67315' 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67315 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67315 ']' 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:36.776 11:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.776 [2024-11-15 11:22:19.617593] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:10:36.776 [2024-11-15 11:22:19.618009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.036 [2024-11-15 11:22:19.797266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.036 [2024-11-15 11:22:19.953459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.296 [2024-11-15 11:22:20.194611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.296 [2024-11-15 11:22:20.194680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.862 [2024-11-15 11:22:20.644631] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.862 [2024-11-15 11:22:20.644724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.862 [2024-11-15 11:22:20.644752] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.862 [2024-11-15 11:22:20.644768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.862 [2024-11-15 11:22:20.644777] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.862 [2024-11-15 11:22:20.644790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.862 
11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.862 11:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.862 "name": "Existed_Raid", 00:10:37.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.862 "strip_size_kb": 0, 00:10:37.862 "state": "configuring", 00:10:37.862 "raid_level": "raid1", 00:10:37.862 "superblock": false, 00:10:37.862 "num_base_bdevs": 3, 00:10:37.862 "num_base_bdevs_discovered": 0, 00:10:37.862 "num_base_bdevs_operational": 3, 00:10:37.862 "base_bdevs_list": [ 00:10:37.862 { 00:10:37.863 "name": "BaseBdev1", 00:10:37.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.863 "is_configured": false, 00:10:37.863 "data_offset": 0, 00:10:37.863 "data_size": 0 00:10:37.863 }, 00:10:37.863 { 00:10:37.863 "name": "BaseBdev2", 00:10:37.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.863 "is_configured": false, 00:10:37.863 "data_offset": 0, 00:10:37.863 "data_size": 0 00:10:37.863 }, 00:10:37.863 { 00:10:37.863 "name": "BaseBdev3", 00:10:37.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.863 "is_configured": false, 00:10:37.863 "data_offset": 0, 00:10:37.863 "data_size": 0 00:10:37.863 } 00:10:37.863 ] 00:10:37.863 }' 00:10:37.863 11:22:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.863 11:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.434 [2024-11-15 11:22:21.172771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.434 [2024-11-15 11:22:21.172818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.434 [2024-11-15 11:22:21.180718] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.434 [2024-11-15 11:22:21.180818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.434 [2024-11-15 11:22:21.180861] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.434 [2024-11-15 11:22:21.180875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.434 [2024-11-15 11:22:21.180884] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.434 [2024-11-15 11:22:21.180897] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.434 [2024-11-15 11:22:21.230860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.434 BaseBdev1 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.434 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.434 [ 00:10:38.434 { 00:10:38.434 "name": "BaseBdev1", 00:10:38.434 "aliases": [ 00:10:38.434 "28504694-9b80-4629-b596-968c33499f57" 00:10:38.434 ], 00:10:38.434 "product_name": "Malloc disk", 00:10:38.434 "block_size": 512, 00:10:38.434 "num_blocks": 65536, 00:10:38.434 "uuid": "28504694-9b80-4629-b596-968c33499f57", 00:10:38.434 "assigned_rate_limits": { 00:10:38.434 "rw_ios_per_sec": 0, 00:10:38.434 "rw_mbytes_per_sec": 0, 00:10:38.434 "r_mbytes_per_sec": 0, 00:10:38.434 "w_mbytes_per_sec": 0 00:10:38.434 }, 00:10:38.434 "claimed": true, 00:10:38.434 "claim_type": "exclusive_write", 00:10:38.434 "zoned": false, 00:10:38.434 "supported_io_types": { 00:10:38.434 "read": true, 00:10:38.434 "write": true, 00:10:38.434 "unmap": true, 00:10:38.434 "flush": true, 00:10:38.434 "reset": true, 00:10:38.434 "nvme_admin": false, 00:10:38.434 "nvme_io": false, 00:10:38.434 "nvme_io_md": false, 00:10:38.434 "write_zeroes": true, 00:10:38.434 "zcopy": true, 00:10:38.434 "get_zone_info": false, 00:10:38.434 "zone_management": false, 00:10:38.434 "zone_append": false, 00:10:38.435 "compare": false, 00:10:38.435 "compare_and_write": false, 00:10:38.435 "abort": true, 00:10:38.435 "seek_hole": false, 00:10:38.435 "seek_data": false, 00:10:38.435 "copy": true, 00:10:38.435 "nvme_iov_md": false 00:10:38.435 }, 00:10:38.435 "memory_domains": [ 00:10:38.435 { 00:10:38.435 "dma_device_id": "system", 00:10:38.435 "dma_device_type": 1 00:10:38.435 }, 00:10:38.435 { 00:10:38.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.435 "dma_device_type": 2 00:10:38.435 } 00:10:38.435 ], 00:10:38.435 "driver_specific": {} 00:10:38.435 } 00:10:38.435 ] 00:10:38.435 11:22:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:38.435 "name": "Existed_Raid", 00:10:38.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.435 "strip_size_kb": 0, 00:10:38.435 "state": "configuring", 00:10:38.435 "raid_level": "raid1", 00:10:38.435 "superblock": false, 00:10:38.435 "num_base_bdevs": 3, 00:10:38.435 "num_base_bdevs_discovered": 1, 00:10:38.435 "num_base_bdevs_operational": 3, 00:10:38.435 "base_bdevs_list": [ 00:10:38.435 { 00:10:38.435 "name": "BaseBdev1", 00:10:38.435 "uuid": "28504694-9b80-4629-b596-968c33499f57", 00:10:38.435 "is_configured": true, 00:10:38.435 "data_offset": 0, 00:10:38.435 "data_size": 65536 00:10:38.435 }, 00:10:38.435 { 00:10:38.435 "name": "BaseBdev2", 00:10:38.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.435 "is_configured": false, 00:10:38.435 "data_offset": 0, 00:10:38.435 "data_size": 0 00:10:38.435 }, 00:10:38.435 { 00:10:38.435 "name": "BaseBdev3", 00:10:38.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.435 "is_configured": false, 00:10:38.435 "data_offset": 0, 00:10:38.435 "data_size": 0 00:10:38.435 } 00:10:38.435 ] 00:10:38.435 }' 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.435 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.003 [2024-11-15 11:22:21.803250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.003 [2024-11-15 11:22:21.803511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.003 [2024-11-15 11:22:21.811295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.003 [2024-11-15 11:22:21.814143] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.003 [2024-11-15 11:22:21.814227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.003 [2024-11-15 11:22:21.814247] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:39.003 [2024-11-15 11:22:21.814264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.003 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.004 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.004 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.004 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.004 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.004 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.004 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.004 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.004 "name": "Existed_Raid", 00:10:39.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.004 "strip_size_kb": 0, 00:10:39.004 "state": "configuring", 00:10:39.004 "raid_level": "raid1", 00:10:39.004 "superblock": false, 00:10:39.004 "num_base_bdevs": 3, 00:10:39.004 "num_base_bdevs_discovered": 1, 00:10:39.004 "num_base_bdevs_operational": 3, 00:10:39.004 "base_bdevs_list": [ 00:10:39.004 { 00:10:39.004 "name": "BaseBdev1", 00:10:39.004 "uuid": "28504694-9b80-4629-b596-968c33499f57", 00:10:39.004 "is_configured": true, 00:10:39.004 "data_offset": 0, 00:10:39.004 "data_size": 65536 00:10:39.004 }, 00:10:39.004 { 00:10:39.004 "name": "BaseBdev2", 00:10:39.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.004 
"is_configured": false, 00:10:39.004 "data_offset": 0, 00:10:39.004 "data_size": 0 00:10:39.004 }, 00:10:39.004 { 00:10:39.004 "name": "BaseBdev3", 00:10:39.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.004 "is_configured": false, 00:10:39.004 "data_offset": 0, 00:10:39.004 "data_size": 0 00:10:39.004 } 00:10:39.004 ] 00:10:39.004 }' 00:10:39.004 11:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.004 11:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.571 [2024-11-15 11:22:22.398562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.571 BaseBdev2 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:39.571 11:22:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.571 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.571 [ 00:10:39.571 { 00:10:39.571 "name": "BaseBdev2", 00:10:39.571 "aliases": [ 00:10:39.571 "1f125a01-1d27-49f0-8eb3-b75be024a24d" 00:10:39.571 ], 00:10:39.571 "product_name": "Malloc disk", 00:10:39.571 "block_size": 512, 00:10:39.571 "num_blocks": 65536, 00:10:39.571 "uuid": "1f125a01-1d27-49f0-8eb3-b75be024a24d", 00:10:39.571 "assigned_rate_limits": { 00:10:39.571 "rw_ios_per_sec": 0, 00:10:39.571 "rw_mbytes_per_sec": 0, 00:10:39.571 "r_mbytes_per_sec": 0, 00:10:39.571 "w_mbytes_per_sec": 0 00:10:39.571 }, 00:10:39.571 "claimed": true, 00:10:39.571 "claim_type": "exclusive_write", 00:10:39.571 "zoned": false, 00:10:39.571 "supported_io_types": { 00:10:39.571 "read": true, 00:10:39.571 "write": true, 00:10:39.571 "unmap": true, 00:10:39.571 "flush": true, 00:10:39.571 "reset": true, 00:10:39.571 "nvme_admin": false, 00:10:39.571 "nvme_io": false, 00:10:39.571 "nvme_io_md": false, 00:10:39.571 "write_zeroes": true, 00:10:39.571 "zcopy": true, 00:10:39.571 "get_zone_info": false, 00:10:39.571 "zone_management": false, 00:10:39.572 "zone_append": false, 00:10:39.572 "compare": false, 00:10:39.572 "compare_and_write": false, 00:10:39.572 "abort": true, 00:10:39.572 "seek_hole": false, 00:10:39.572 "seek_data": false, 00:10:39.572 "copy": true, 00:10:39.572 "nvme_iov_md": false 00:10:39.572 }, 00:10:39.572 
"memory_domains": [ 00:10:39.572 { 00:10:39.572 "dma_device_id": "system", 00:10:39.572 "dma_device_type": 1 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.572 "dma_device_type": 2 00:10:39.572 } 00:10:39.572 ], 00:10:39.572 "driver_specific": {} 00:10:39.572 } 00:10:39.572 ] 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.572 "name": "Existed_Raid", 00:10:39.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.572 "strip_size_kb": 0, 00:10:39.572 "state": "configuring", 00:10:39.572 "raid_level": "raid1", 00:10:39.572 "superblock": false, 00:10:39.572 "num_base_bdevs": 3, 00:10:39.572 "num_base_bdevs_discovered": 2, 00:10:39.572 "num_base_bdevs_operational": 3, 00:10:39.572 "base_bdevs_list": [ 00:10:39.572 { 00:10:39.572 "name": "BaseBdev1", 00:10:39.572 "uuid": "28504694-9b80-4629-b596-968c33499f57", 00:10:39.572 "is_configured": true, 00:10:39.572 "data_offset": 0, 00:10:39.572 "data_size": 65536 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "name": "BaseBdev2", 00:10:39.572 "uuid": "1f125a01-1d27-49f0-8eb3-b75be024a24d", 00:10:39.572 "is_configured": true, 00:10:39.572 "data_offset": 0, 00:10:39.572 "data_size": 65536 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "name": "BaseBdev3", 00:10:39.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.572 "is_configured": false, 00:10:39.572 "data_offset": 0, 00:10:39.572 "data_size": 0 00:10:39.572 } 00:10:39.572 ] 00:10:39.572 }' 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.572 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.140 11:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:40.140 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.140 11:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.140 [2024-11-15 11:22:23.030357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.140 [2024-11-15 11:22:23.030715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:40.140 [2024-11-15 11:22:23.030750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:40.140 [2024-11-15 11:22:23.031145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:40.140 [2024-11-15 11:22:23.031462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:40.140 [2024-11-15 11:22:23.031481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:40.140 [2024-11-15 11:22:23.031898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.140 BaseBdev3 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.140 [ 00:10:40.140 { 00:10:40.140 "name": "BaseBdev3", 00:10:40.140 "aliases": [ 00:10:40.140 "19f92413-73ce-4852-a9af-22b20b41e97e" 00:10:40.140 ], 00:10:40.140 "product_name": "Malloc disk", 00:10:40.140 "block_size": 512, 00:10:40.140 "num_blocks": 65536, 00:10:40.140 "uuid": "19f92413-73ce-4852-a9af-22b20b41e97e", 00:10:40.140 "assigned_rate_limits": { 00:10:40.140 "rw_ios_per_sec": 0, 00:10:40.140 "rw_mbytes_per_sec": 0, 00:10:40.140 "r_mbytes_per_sec": 0, 00:10:40.140 "w_mbytes_per_sec": 0 00:10:40.140 }, 00:10:40.140 "claimed": true, 00:10:40.140 "claim_type": "exclusive_write", 00:10:40.140 "zoned": false, 00:10:40.140 "supported_io_types": { 00:10:40.140 "read": true, 00:10:40.140 "write": true, 00:10:40.140 "unmap": true, 00:10:40.140 "flush": true, 00:10:40.140 "reset": true, 00:10:40.140 "nvme_admin": false, 00:10:40.140 "nvme_io": false, 00:10:40.140 "nvme_io_md": false, 00:10:40.140 "write_zeroes": true, 00:10:40.140 "zcopy": true, 00:10:40.140 "get_zone_info": false, 00:10:40.140 "zone_management": false, 00:10:40.140 "zone_append": false, 00:10:40.140 "compare": false, 00:10:40.140 "compare_and_write": false, 00:10:40.140 "abort": true, 00:10:40.140 "seek_hole": false, 00:10:40.140 "seek_data": false, 00:10:40.140 
"copy": true, 00:10:40.140 "nvme_iov_md": false 00:10:40.140 }, 00:10:40.140 "memory_domains": [ 00:10:40.140 { 00:10:40.140 "dma_device_id": "system", 00:10:40.140 "dma_device_type": 1 00:10:40.140 }, 00:10:40.140 { 00:10:40.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.140 "dma_device_type": 2 00:10:40.140 } 00:10:40.140 ], 00:10:40.140 "driver_specific": {} 00:10:40.140 } 00:10:40.140 ] 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.140 11:22:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.140 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.400 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.400 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.400 "name": "Existed_Raid", 00:10:40.400 "uuid": "e5b6c59a-afe9-4c64-a167-3dc522030844", 00:10:40.400 "strip_size_kb": 0, 00:10:40.400 "state": "online", 00:10:40.400 "raid_level": "raid1", 00:10:40.400 "superblock": false, 00:10:40.400 "num_base_bdevs": 3, 00:10:40.400 "num_base_bdevs_discovered": 3, 00:10:40.400 "num_base_bdevs_operational": 3, 00:10:40.400 "base_bdevs_list": [ 00:10:40.400 { 00:10:40.400 "name": "BaseBdev1", 00:10:40.400 "uuid": "28504694-9b80-4629-b596-968c33499f57", 00:10:40.400 "is_configured": true, 00:10:40.400 "data_offset": 0, 00:10:40.400 "data_size": 65536 00:10:40.400 }, 00:10:40.400 { 00:10:40.400 "name": "BaseBdev2", 00:10:40.400 "uuid": "1f125a01-1d27-49f0-8eb3-b75be024a24d", 00:10:40.400 "is_configured": true, 00:10:40.400 "data_offset": 0, 00:10:40.400 "data_size": 65536 00:10:40.400 }, 00:10:40.400 { 00:10:40.400 "name": "BaseBdev3", 00:10:40.400 "uuid": "19f92413-73ce-4852-a9af-22b20b41e97e", 00:10:40.400 "is_configured": true, 00:10:40.400 "data_offset": 0, 00:10:40.400 "data_size": 65536 00:10:40.400 } 00:10:40.400 ] 00:10:40.400 }' 00:10:40.400 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.400 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.659 11:22:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.659 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.659 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.659 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.659 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.659 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.659 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.659 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.659 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.659 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.659 [2024-11-15 11:22:23.603213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.949 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.949 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.949 "name": "Existed_Raid", 00:10:40.949 "aliases": [ 00:10:40.949 "e5b6c59a-afe9-4c64-a167-3dc522030844" 00:10:40.949 ], 00:10:40.949 "product_name": "Raid Volume", 00:10:40.949 "block_size": 512, 00:10:40.949 "num_blocks": 65536, 00:10:40.949 "uuid": "e5b6c59a-afe9-4c64-a167-3dc522030844", 00:10:40.949 "assigned_rate_limits": { 00:10:40.950 "rw_ios_per_sec": 0, 00:10:40.950 "rw_mbytes_per_sec": 0, 00:10:40.950 "r_mbytes_per_sec": 0, 00:10:40.950 "w_mbytes_per_sec": 0 00:10:40.950 }, 00:10:40.950 "claimed": false, 00:10:40.950 "zoned": false, 
00:10:40.950 "supported_io_types": { 00:10:40.950 "read": true, 00:10:40.950 "write": true, 00:10:40.950 "unmap": false, 00:10:40.950 "flush": false, 00:10:40.950 "reset": true, 00:10:40.950 "nvme_admin": false, 00:10:40.950 "nvme_io": false, 00:10:40.950 "nvme_io_md": false, 00:10:40.950 "write_zeroes": true, 00:10:40.950 "zcopy": false, 00:10:40.950 "get_zone_info": false, 00:10:40.950 "zone_management": false, 00:10:40.950 "zone_append": false, 00:10:40.950 "compare": false, 00:10:40.950 "compare_and_write": false, 00:10:40.950 "abort": false, 00:10:40.950 "seek_hole": false, 00:10:40.950 "seek_data": false, 00:10:40.950 "copy": false, 00:10:40.950 "nvme_iov_md": false 00:10:40.950 }, 00:10:40.950 "memory_domains": [ 00:10:40.950 { 00:10:40.950 "dma_device_id": "system", 00:10:40.950 "dma_device_type": 1 00:10:40.950 }, 00:10:40.950 { 00:10:40.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.950 "dma_device_type": 2 00:10:40.950 }, 00:10:40.950 { 00:10:40.950 "dma_device_id": "system", 00:10:40.950 "dma_device_type": 1 00:10:40.950 }, 00:10:40.950 { 00:10:40.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.950 "dma_device_type": 2 00:10:40.950 }, 00:10:40.950 { 00:10:40.950 "dma_device_id": "system", 00:10:40.950 "dma_device_type": 1 00:10:40.950 }, 00:10:40.950 { 00:10:40.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.950 "dma_device_type": 2 00:10:40.950 } 00:10:40.950 ], 00:10:40.950 "driver_specific": { 00:10:40.950 "raid": { 00:10:40.950 "uuid": "e5b6c59a-afe9-4c64-a167-3dc522030844", 00:10:40.950 "strip_size_kb": 0, 00:10:40.950 "state": "online", 00:10:40.950 "raid_level": "raid1", 00:10:40.950 "superblock": false, 00:10:40.950 "num_base_bdevs": 3, 00:10:40.950 "num_base_bdevs_discovered": 3, 00:10:40.950 "num_base_bdevs_operational": 3, 00:10:40.950 "base_bdevs_list": [ 00:10:40.950 { 00:10:40.950 "name": "BaseBdev1", 00:10:40.950 "uuid": "28504694-9b80-4629-b596-968c33499f57", 00:10:40.950 "is_configured": true, 00:10:40.950 
"data_offset": 0, 00:10:40.950 "data_size": 65536 00:10:40.950 }, 00:10:40.950 { 00:10:40.950 "name": "BaseBdev2", 00:10:40.950 "uuid": "1f125a01-1d27-49f0-8eb3-b75be024a24d", 00:10:40.950 "is_configured": true, 00:10:40.950 "data_offset": 0, 00:10:40.950 "data_size": 65536 00:10:40.950 }, 00:10:40.950 { 00:10:40.950 "name": "BaseBdev3", 00:10:40.950 "uuid": "19f92413-73ce-4852-a9af-22b20b41e97e", 00:10:40.950 "is_configured": true, 00:10:40.950 "data_offset": 0, 00:10:40.950 "data_size": 65536 00:10:40.950 } 00:10:40.950 ] 00:10:40.950 } 00:10:40.950 } 00:10:40.950 }' 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:40.950 BaseBdev2 00:10:40.950 BaseBdev3' 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.208 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.208 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:41.208 11:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.208 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.208 11:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.208 [2024-11-15 11:22:23.922770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.208 "name": "Existed_Raid", 00:10:41.208 "uuid": "e5b6c59a-afe9-4c64-a167-3dc522030844", 00:10:41.208 "strip_size_kb": 0, 00:10:41.208 "state": "online", 00:10:41.208 "raid_level": "raid1", 00:10:41.208 "superblock": false, 00:10:41.208 "num_base_bdevs": 3, 00:10:41.208 "num_base_bdevs_discovered": 2, 00:10:41.208 "num_base_bdevs_operational": 2, 00:10:41.208 "base_bdevs_list": [ 00:10:41.208 { 00:10:41.208 "name": null, 00:10:41.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.208 "is_configured": false, 00:10:41.208 "data_offset": 0, 00:10:41.208 "data_size": 65536 00:10:41.208 }, 00:10:41.208 { 00:10:41.208 "name": "BaseBdev2", 00:10:41.208 "uuid": "1f125a01-1d27-49f0-8eb3-b75be024a24d", 00:10:41.208 "is_configured": true, 00:10:41.208 "data_offset": 0, 00:10:41.208 "data_size": 65536 00:10:41.208 }, 00:10:41.208 { 00:10:41.208 "name": "BaseBdev3", 00:10:41.208 "uuid": "19f92413-73ce-4852-a9af-22b20b41e97e", 00:10:41.208 "is_configured": true, 00:10:41.208 "data_offset": 0, 00:10:41.208 "data_size": 65536 00:10:41.208 } 00:10:41.208 ] 
00:10:41.208 }' 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.208 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.775 [2024-11-15 11:22:24.616062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.775 11:22:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.775 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.034 [2024-11-15 11:22:24.770076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:42.034 [2024-11-15 11:22:24.770332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.034 [2024-11-15 11:22:24.859620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.034 [2024-11-15 11:22:24.859700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.034 [2024-11-15 11:22:24.859736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:42.034 11:22:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.034 BaseBdev2 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:42.034 
11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:42.034 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:42.035 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.035 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.035 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.035 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:42.035 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.035 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.293 [ 00:10:42.293 { 00:10:42.293 "name": "BaseBdev2", 00:10:42.293 "aliases": [ 00:10:42.293 "49f2e0a1-cdcb-4b8d-bbc2-f218fc756093" 00:10:42.293 ], 00:10:42.293 "product_name": "Malloc disk", 00:10:42.293 "block_size": 512, 00:10:42.293 "num_blocks": 65536, 00:10:42.293 "uuid": "49f2e0a1-cdcb-4b8d-bbc2-f218fc756093", 00:10:42.293 "assigned_rate_limits": { 00:10:42.293 "rw_ios_per_sec": 0, 00:10:42.293 "rw_mbytes_per_sec": 0, 00:10:42.293 "r_mbytes_per_sec": 0, 00:10:42.293 "w_mbytes_per_sec": 0 00:10:42.293 }, 00:10:42.293 "claimed": false, 00:10:42.293 "zoned": false, 00:10:42.293 "supported_io_types": { 00:10:42.293 "read": true, 00:10:42.293 "write": true, 00:10:42.293 "unmap": true, 00:10:42.293 "flush": true, 00:10:42.293 "reset": true, 00:10:42.293 "nvme_admin": false, 00:10:42.293 "nvme_io": false, 00:10:42.293 "nvme_io_md": false, 00:10:42.293 "write_zeroes": true, 
00:10:42.293 "zcopy": true, 00:10:42.293 "get_zone_info": false, 00:10:42.293 "zone_management": false, 00:10:42.293 "zone_append": false, 00:10:42.293 "compare": false, 00:10:42.293 "compare_and_write": false, 00:10:42.293 "abort": true, 00:10:42.293 "seek_hole": false, 00:10:42.293 "seek_data": false, 00:10:42.293 "copy": true, 00:10:42.293 "nvme_iov_md": false 00:10:42.293 }, 00:10:42.293 "memory_domains": [ 00:10:42.293 { 00:10:42.293 "dma_device_id": "system", 00:10:42.293 "dma_device_type": 1 00:10:42.293 }, 00:10:42.293 { 00:10:42.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.293 "dma_device_type": 2 00:10:42.293 } 00:10:42.293 ], 00:10:42.293 "driver_specific": {} 00:10:42.293 } 00:10:42.293 ] 00:10:42.293 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.293 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:42.293 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.293 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.293 11:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:42.293 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.293 11:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.293 BaseBdev3 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:42.293 11:22:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.293 [ 00:10:42.293 { 00:10:42.293 "name": "BaseBdev3", 00:10:42.293 "aliases": [ 00:10:42.293 "fe8a057b-b4ee-4a0b-8cb4-ae6a4411017e" 00:10:42.293 ], 00:10:42.293 "product_name": "Malloc disk", 00:10:42.293 "block_size": 512, 00:10:42.293 "num_blocks": 65536, 00:10:42.293 "uuid": "fe8a057b-b4ee-4a0b-8cb4-ae6a4411017e", 00:10:42.293 "assigned_rate_limits": { 00:10:42.293 "rw_ios_per_sec": 0, 00:10:42.293 "rw_mbytes_per_sec": 0, 00:10:42.293 "r_mbytes_per_sec": 0, 00:10:42.293 "w_mbytes_per_sec": 0 00:10:42.293 }, 00:10:42.293 "claimed": false, 00:10:42.293 "zoned": false, 00:10:42.293 "supported_io_types": { 00:10:42.293 "read": true, 00:10:42.293 "write": true, 00:10:42.293 "unmap": true, 00:10:42.293 "flush": true, 00:10:42.293 "reset": true, 00:10:42.293 "nvme_admin": false, 00:10:42.293 "nvme_io": false, 00:10:42.293 "nvme_io_md": false, 00:10:42.293 "write_zeroes": true, 
00:10:42.293 "zcopy": true, 00:10:42.293 "get_zone_info": false, 00:10:42.293 "zone_management": false, 00:10:42.293 "zone_append": false, 00:10:42.293 "compare": false, 00:10:42.293 "compare_and_write": false, 00:10:42.293 "abort": true, 00:10:42.293 "seek_hole": false, 00:10:42.293 "seek_data": false, 00:10:42.293 "copy": true, 00:10:42.293 "nvme_iov_md": false 00:10:42.293 }, 00:10:42.293 "memory_domains": [ 00:10:42.293 { 00:10:42.293 "dma_device_id": "system", 00:10:42.293 "dma_device_type": 1 00:10:42.293 }, 00:10:42.293 { 00:10:42.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.293 "dma_device_type": 2 00:10:42.293 } 00:10:42.293 ], 00:10:42.293 "driver_specific": {} 00:10:42.293 } 00:10:42.293 ] 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.293 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.293 [2024-11-15 11:22:25.086792] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.293 [2024-11-15 11:22:25.087044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.293 [2024-11-15 11:22:25.087086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.294 [2024-11-15 11:22:25.089924] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:42.294 "name": "Existed_Raid", 00:10:42.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.294 "strip_size_kb": 0, 00:10:42.294 "state": "configuring", 00:10:42.294 "raid_level": "raid1", 00:10:42.294 "superblock": false, 00:10:42.294 "num_base_bdevs": 3, 00:10:42.294 "num_base_bdevs_discovered": 2, 00:10:42.294 "num_base_bdevs_operational": 3, 00:10:42.294 "base_bdevs_list": [ 00:10:42.294 { 00:10:42.294 "name": "BaseBdev1", 00:10:42.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.294 "is_configured": false, 00:10:42.294 "data_offset": 0, 00:10:42.294 "data_size": 0 00:10:42.294 }, 00:10:42.294 { 00:10:42.294 "name": "BaseBdev2", 00:10:42.294 "uuid": "49f2e0a1-cdcb-4b8d-bbc2-f218fc756093", 00:10:42.294 "is_configured": true, 00:10:42.294 "data_offset": 0, 00:10:42.294 "data_size": 65536 00:10:42.294 }, 00:10:42.294 { 00:10:42.294 "name": "BaseBdev3", 00:10:42.294 "uuid": "fe8a057b-b4ee-4a0b-8cb4-ae6a4411017e", 00:10:42.294 "is_configured": true, 00:10:42.294 "data_offset": 0, 00:10:42.294 "data_size": 65536 00:10:42.294 } 00:10:42.294 ] 00:10:42.294 }' 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.294 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.862 [2024-11-15 11:22:25.623120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.862 "name": "Existed_Raid", 00:10:42.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.862 "strip_size_kb": 0, 00:10:42.862 "state": "configuring", 00:10:42.862 "raid_level": "raid1", 00:10:42.862 "superblock": false, 00:10:42.862 "num_base_bdevs": 3, 
00:10:42.862 "num_base_bdevs_discovered": 1, 00:10:42.862 "num_base_bdevs_operational": 3, 00:10:42.862 "base_bdevs_list": [ 00:10:42.862 { 00:10:42.862 "name": "BaseBdev1", 00:10:42.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.862 "is_configured": false, 00:10:42.862 "data_offset": 0, 00:10:42.862 "data_size": 0 00:10:42.862 }, 00:10:42.862 { 00:10:42.862 "name": null, 00:10:42.862 "uuid": "49f2e0a1-cdcb-4b8d-bbc2-f218fc756093", 00:10:42.862 "is_configured": false, 00:10:42.862 "data_offset": 0, 00:10:42.862 "data_size": 65536 00:10:42.862 }, 00:10:42.862 { 00:10:42.862 "name": "BaseBdev3", 00:10:42.862 "uuid": "fe8a057b-b4ee-4a0b-8cb4-ae6a4411017e", 00:10:42.862 "is_configured": true, 00:10:42.862 "data_offset": 0, 00:10:42.862 "data_size": 65536 00:10:42.862 } 00:10:42.862 ] 00:10:42.862 }' 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.862 11:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.431 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.431 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.431 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.431 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:43.431 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.431 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:43.431 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.431 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.431 11:22:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.431 [2024-11-15 11:22:26.245929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.431 BaseBdev1 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.432 [ 00:10:43.432 { 00:10:43.432 "name": "BaseBdev1", 00:10:43.432 "aliases": [ 00:10:43.432 "b7c09b41-9f0e-4409-8d87-753899c3d460" 00:10:43.432 ], 00:10:43.432 "product_name": "Malloc disk", 
00:10:43.432 "block_size": 512, 00:10:43.432 "num_blocks": 65536, 00:10:43.432 "uuid": "b7c09b41-9f0e-4409-8d87-753899c3d460", 00:10:43.432 "assigned_rate_limits": { 00:10:43.432 "rw_ios_per_sec": 0, 00:10:43.432 "rw_mbytes_per_sec": 0, 00:10:43.432 "r_mbytes_per_sec": 0, 00:10:43.432 "w_mbytes_per_sec": 0 00:10:43.432 }, 00:10:43.432 "claimed": true, 00:10:43.432 "claim_type": "exclusive_write", 00:10:43.432 "zoned": false, 00:10:43.432 "supported_io_types": { 00:10:43.432 "read": true, 00:10:43.432 "write": true, 00:10:43.432 "unmap": true, 00:10:43.432 "flush": true, 00:10:43.432 "reset": true, 00:10:43.432 "nvme_admin": false, 00:10:43.432 "nvme_io": false, 00:10:43.432 "nvme_io_md": false, 00:10:43.432 "write_zeroes": true, 00:10:43.432 "zcopy": true, 00:10:43.432 "get_zone_info": false, 00:10:43.432 "zone_management": false, 00:10:43.432 "zone_append": false, 00:10:43.432 "compare": false, 00:10:43.432 "compare_and_write": false, 00:10:43.432 "abort": true, 00:10:43.432 "seek_hole": false, 00:10:43.432 "seek_data": false, 00:10:43.432 "copy": true, 00:10:43.432 "nvme_iov_md": false 00:10:43.432 }, 00:10:43.432 "memory_domains": [ 00:10:43.432 { 00:10:43.432 "dma_device_id": "system", 00:10:43.432 "dma_device_type": 1 00:10:43.432 }, 00:10:43.432 { 00:10:43.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.432 "dma_device_type": 2 00:10:43.432 } 00:10:43.432 ], 00:10:43.432 "driver_specific": {} 00:10:43.432 } 00:10:43.432 ] 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.432 "name": "Existed_Raid", 00:10:43.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.432 "strip_size_kb": 0, 00:10:43.432 "state": "configuring", 00:10:43.432 "raid_level": "raid1", 00:10:43.432 "superblock": false, 00:10:43.432 "num_base_bdevs": 3, 00:10:43.432 "num_base_bdevs_discovered": 2, 00:10:43.432 "num_base_bdevs_operational": 3, 00:10:43.432 "base_bdevs_list": [ 00:10:43.432 { 00:10:43.432 "name": "BaseBdev1", 00:10:43.432 "uuid": 
"b7c09b41-9f0e-4409-8d87-753899c3d460", 00:10:43.432 "is_configured": true, 00:10:43.432 "data_offset": 0, 00:10:43.432 "data_size": 65536 00:10:43.432 }, 00:10:43.432 { 00:10:43.432 "name": null, 00:10:43.432 "uuid": "49f2e0a1-cdcb-4b8d-bbc2-f218fc756093", 00:10:43.432 "is_configured": false, 00:10:43.432 "data_offset": 0, 00:10:43.432 "data_size": 65536 00:10:43.432 }, 00:10:43.432 { 00:10:43.432 "name": "BaseBdev3", 00:10:43.432 "uuid": "fe8a057b-b4ee-4a0b-8cb4-ae6a4411017e", 00:10:43.432 "is_configured": true, 00:10:43.432 "data_offset": 0, 00:10:43.432 "data_size": 65536 00:10:43.432 } 00:10:43.432 ] 00:10:43.432 }' 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.432 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.000 [2024-11-15 11:22:26.850340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:44.000 11:22:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.000 "name": "Existed_Raid", 00:10:44.000 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:44.000 "strip_size_kb": 0, 00:10:44.000 "state": "configuring", 00:10:44.000 "raid_level": "raid1", 00:10:44.000 "superblock": false, 00:10:44.000 "num_base_bdevs": 3, 00:10:44.000 "num_base_bdevs_discovered": 1, 00:10:44.000 "num_base_bdevs_operational": 3, 00:10:44.000 "base_bdevs_list": [ 00:10:44.000 { 00:10:44.000 "name": "BaseBdev1", 00:10:44.000 "uuid": "b7c09b41-9f0e-4409-8d87-753899c3d460", 00:10:44.000 "is_configured": true, 00:10:44.000 "data_offset": 0, 00:10:44.000 "data_size": 65536 00:10:44.000 }, 00:10:44.000 { 00:10:44.000 "name": null, 00:10:44.000 "uuid": "49f2e0a1-cdcb-4b8d-bbc2-f218fc756093", 00:10:44.000 "is_configured": false, 00:10:44.000 "data_offset": 0, 00:10:44.000 "data_size": 65536 00:10:44.000 }, 00:10:44.000 { 00:10:44.000 "name": null, 00:10:44.000 "uuid": "fe8a057b-b4ee-4a0b-8cb4-ae6a4411017e", 00:10:44.000 "is_configured": false, 00:10:44.000 "data_offset": 0, 00:10:44.000 "data_size": 65536 00:10:44.000 } 00:10:44.000 ] 00:10:44.000 }' 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.000 11:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.568 [2024-11-15 11:22:27.426671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.568 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.568 "name": "Existed_Raid", 00:10:44.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.568 "strip_size_kb": 0, 00:10:44.568 "state": "configuring", 00:10:44.568 "raid_level": "raid1", 00:10:44.568 "superblock": false, 00:10:44.568 "num_base_bdevs": 3, 00:10:44.568 "num_base_bdevs_discovered": 2, 00:10:44.568 "num_base_bdevs_operational": 3, 00:10:44.568 "base_bdevs_list": [ 00:10:44.568 { 00:10:44.568 "name": "BaseBdev1", 00:10:44.568 "uuid": "b7c09b41-9f0e-4409-8d87-753899c3d460", 00:10:44.569 "is_configured": true, 00:10:44.569 "data_offset": 0, 00:10:44.569 "data_size": 65536 00:10:44.569 }, 00:10:44.569 { 00:10:44.569 "name": null, 00:10:44.569 "uuid": "49f2e0a1-cdcb-4b8d-bbc2-f218fc756093", 00:10:44.569 "is_configured": false, 00:10:44.569 "data_offset": 0, 00:10:44.569 "data_size": 65536 00:10:44.569 }, 00:10:44.569 { 00:10:44.569 "name": "BaseBdev3", 00:10:44.569 "uuid": "fe8a057b-b4ee-4a0b-8cb4-ae6a4411017e", 00:10:44.569 "is_configured": true, 00:10:44.569 "data_offset": 0, 00:10:44.569 "data_size": 65536 00:10:44.569 } 00:10:44.569 ] 00:10:44.569 }' 00:10:44.569 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.569 11:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.135 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:45.135 11:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.135 11:22:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.135 11:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.135 11:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.135 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:45.135 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:45.135 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.135 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.135 [2024-11-15 11:22:28.014964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.393 11:22:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.393 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.393 "name": "Existed_Raid", 00:10:45.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.393 "strip_size_kb": 0, 00:10:45.393 "state": "configuring", 00:10:45.393 "raid_level": "raid1", 00:10:45.393 "superblock": false, 00:10:45.393 "num_base_bdevs": 3, 00:10:45.393 "num_base_bdevs_discovered": 1, 00:10:45.393 "num_base_bdevs_operational": 3, 00:10:45.393 "base_bdevs_list": [ 00:10:45.393 { 00:10:45.393 "name": null, 00:10:45.393 "uuid": "b7c09b41-9f0e-4409-8d87-753899c3d460", 00:10:45.393 "is_configured": false, 00:10:45.393 "data_offset": 0, 00:10:45.393 "data_size": 65536 00:10:45.393 }, 00:10:45.393 { 00:10:45.393 "name": null, 00:10:45.393 "uuid": "49f2e0a1-cdcb-4b8d-bbc2-f218fc756093", 00:10:45.393 "is_configured": false, 00:10:45.393 "data_offset": 0, 00:10:45.393 "data_size": 65536 00:10:45.394 }, 00:10:45.394 { 00:10:45.394 "name": "BaseBdev3", 00:10:45.394 "uuid": "fe8a057b-b4ee-4a0b-8cb4-ae6a4411017e", 00:10:45.394 "is_configured": true, 00:10:45.394 "data_offset": 0, 00:10:45.394 "data_size": 65536 00:10:45.394 } 00:10:45.394 ] 00:10:45.394 }' 00:10:45.394 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.394 11:22:28 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.961 [2024-11-15 11:22:28.721227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.961 "name": "Existed_Raid", 00:10:45.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.961 "strip_size_kb": 0, 00:10:45.961 "state": "configuring", 00:10:45.961 "raid_level": "raid1", 00:10:45.961 "superblock": false, 00:10:45.961 "num_base_bdevs": 3, 00:10:45.961 "num_base_bdevs_discovered": 2, 00:10:45.961 "num_base_bdevs_operational": 3, 00:10:45.961 "base_bdevs_list": [ 00:10:45.961 { 00:10:45.961 "name": null, 00:10:45.961 "uuid": "b7c09b41-9f0e-4409-8d87-753899c3d460", 00:10:45.961 "is_configured": false, 00:10:45.961 "data_offset": 0, 00:10:45.961 "data_size": 65536 00:10:45.961 }, 00:10:45.961 { 00:10:45.961 "name": "BaseBdev2", 00:10:45.961 "uuid": "49f2e0a1-cdcb-4b8d-bbc2-f218fc756093", 00:10:45.961 "is_configured": true, 00:10:45.961 "data_offset": 0, 00:10:45.961 "data_size": 65536 00:10:45.961 }, 00:10:45.961 { 
00:10:45.961 "name": "BaseBdev3", 00:10:45.961 "uuid": "fe8a057b-b4ee-4a0b-8cb4-ae6a4411017e", 00:10:45.961 "is_configured": true, 00:10:45.961 "data_offset": 0, 00:10:45.961 "data_size": 65536 00:10:45.961 } 00:10:45.961 ] 00:10:45.961 }' 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.961 11:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b7c09b41-9f0e-4409-8d87-753899c3d460 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.574 11:22:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.574 [2024-11-15 11:22:29.376554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:46.574 [2024-11-15 11:22:29.376675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:46.574 [2024-11-15 11:22:29.376689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:46.574 [2024-11-15 11:22:29.377091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:46.574 [2024-11-15 11:22:29.377281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:46.574 [2024-11-15 11:22:29.377301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:46.574 NewBaseBdev 00:10:46.574 [2024-11-15 11:22:29.377690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.574 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.575 [ 00:10:46.575 { 00:10:46.575 "name": "NewBaseBdev", 00:10:46.575 "aliases": [ 00:10:46.575 "b7c09b41-9f0e-4409-8d87-753899c3d460" 00:10:46.575 ], 00:10:46.575 "product_name": "Malloc disk", 00:10:46.575 "block_size": 512, 00:10:46.575 "num_blocks": 65536, 00:10:46.575 "uuid": "b7c09b41-9f0e-4409-8d87-753899c3d460", 00:10:46.575 "assigned_rate_limits": { 00:10:46.575 "rw_ios_per_sec": 0, 00:10:46.575 "rw_mbytes_per_sec": 0, 00:10:46.575 "r_mbytes_per_sec": 0, 00:10:46.575 "w_mbytes_per_sec": 0 00:10:46.575 }, 00:10:46.575 "claimed": true, 00:10:46.575 "claim_type": "exclusive_write", 00:10:46.575 "zoned": false, 00:10:46.575 "supported_io_types": { 00:10:46.575 "read": true, 00:10:46.575 "write": true, 00:10:46.575 "unmap": true, 00:10:46.575 "flush": true, 00:10:46.575 "reset": true, 00:10:46.575 "nvme_admin": false, 00:10:46.575 "nvme_io": false, 00:10:46.575 "nvme_io_md": false, 00:10:46.575 "write_zeroes": true, 00:10:46.575 "zcopy": true, 00:10:46.575 "get_zone_info": false, 00:10:46.575 "zone_management": false, 00:10:46.575 "zone_append": false, 00:10:46.575 "compare": false, 00:10:46.575 "compare_and_write": false, 00:10:46.575 "abort": true, 00:10:46.575 "seek_hole": false, 00:10:46.575 "seek_data": false, 00:10:46.575 "copy": true, 00:10:46.575 "nvme_iov_md": false 00:10:46.575 }, 00:10:46.575 "memory_domains": [ 00:10:46.575 { 00:10:46.575 
"dma_device_id": "system", 00:10:46.575 "dma_device_type": 1 00:10:46.575 }, 00:10:46.575 { 00:10:46.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.575 "dma_device_type": 2 00:10:46.575 } 00:10:46.575 ], 00:10:46.575 "driver_specific": {} 00:10:46.575 } 00:10:46.575 ] 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.575 "name": "Existed_Raid", 00:10:46.575 "uuid": "27451c32-9208-4458-9f08-910963e38dc9", 00:10:46.575 "strip_size_kb": 0, 00:10:46.575 "state": "online", 00:10:46.575 "raid_level": "raid1", 00:10:46.575 "superblock": false, 00:10:46.575 "num_base_bdevs": 3, 00:10:46.575 "num_base_bdevs_discovered": 3, 00:10:46.575 "num_base_bdevs_operational": 3, 00:10:46.575 "base_bdevs_list": [ 00:10:46.575 { 00:10:46.575 "name": "NewBaseBdev", 00:10:46.575 "uuid": "b7c09b41-9f0e-4409-8d87-753899c3d460", 00:10:46.575 "is_configured": true, 00:10:46.575 "data_offset": 0, 00:10:46.575 "data_size": 65536 00:10:46.575 }, 00:10:46.575 { 00:10:46.575 "name": "BaseBdev2", 00:10:46.575 "uuid": "49f2e0a1-cdcb-4b8d-bbc2-f218fc756093", 00:10:46.575 "is_configured": true, 00:10:46.575 "data_offset": 0, 00:10:46.575 "data_size": 65536 00:10:46.575 }, 00:10:46.575 { 00:10:46.575 "name": "BaseBdev3", 00:10:46.575 "uuid": "fe8a057b-b4ee-4a0b-8cb4-ae6a4411017e", 00:10:46.575 "is_configured": true, 00:10:46.575 "data_offset": 0, 00:10:46.575 "data_size": 65536 00:10:46.575 } 00:10:46.575 ] 00:10:46.575 }' 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.575 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.155 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.155 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:47.155 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.155 11:22:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.155 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.155 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.155 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:47.155 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.155 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.155 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.155 [2024-11-15 11:22:29.921240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.155 11:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.155 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.155 "name": "Existed_Raid", 00:10:47.155 "aliases": [ 00:10:47.155 "27451c32-9208-4458-9f08-910963e38dc9" 00:10:47.155 ], 00:10:47.155 "product_name": "Raid Volume", 00:10:47.155 "block_size": 512, 00:10:47.155 "num_blocks": 65536, 00:10:47.155 "uuid": "27451c32-9208-4458-9f08-910963e38dc9", 00:10:47.155 "assigned_rate_limits": { 00:10:47.155 "rw_ios_per_sec": 0, 00:10:47.155 "rw_mbytes_per_sec": 0, 00:10:47.155 "r_mbytes_per_sec": 0, 00:10:47.155 "w_mbytes_per_sec": 0 00:10:47.155 }, 00:10:47.155 "claimed": false, 00:10:47.155 "zoned": false, 00:10:47.155 "supported_io_types": { 00:10:47.155 "read": true, 00:10:47.155 "write": true, 00:10:47.155 "unmap": false, 00:10:47.155 "flush": false, 00:10:47.155 "reset": true, 00:10:47.155 "nvme_admin": false, 00:10:47.155 "nvme_io": false, 00:10:47.155 "nvme_io_md": false, 00:10:47.155 "write_zeroes": true, 00:10:47.155 "zcopy": false, 00:10:47.155 
"get_zone_info": false, 00:10:47.155 "zone_management": false, 00:10:47.155 "zone_append": false, 00:10:47.155 "compare": false, 00:10:47.155 "compare_and_write": false, 00:10:47.155 "abort": false, 00:10:47.155 "seek_hole": false, 00:10:47.155 "seek_data": false, 00:10:47.155 "copy": false, 00:10:47.155 "nvme_iov_md": false 00:10:47.155 }, 00:10:47.155 "memory_domains": [ 00:10:47.155 { 00:10:47.155 "dma_device_id": "system", 00:10:47.155 "dma_device_type": 1 00:10:47.155 }, 00:10:47.155 { 00:10:47.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.155 "dma_device_type": 2 00:10:47.155 }, 00:10:47.155 { 00:10:47.155 "dma_device_id": "system", 00:10:47.155 "dma_device_type": 1 00:10:47.155 }, 00:10:47.155 { 00:10:47.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.155 "dma_device_type": 2 00:10:47.155 }, 00:10:47.155 { 00:10:47.155 "dma_device_id": "system", 00:10:47.155 "dma_device_type": 1 00:10:47.155 }, 00:10:47.155 { 00:10:47.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.155 "dma_device_type": 2 00:10:47.155 } 00:10:47.155 ], 00:10:47.155 "driver_specific": { 00:10:47.155 "raid": { 00:10:47.155 "uuid": "27451c32-9208-4458-9f08-910963e38dc9", 00:10:47.155 "strip_size_kb": 0, 00:10:47.155 "state": "online", 00:10:47.155 "raid_level": "raid1", 00:10:47.155 "superblock": false, 00:10:47.155 "num_base_bdevs": 3, 00:10:47.155 "num_base_bdevs_discovered": 3, 00:10:47.155 "num_base_bdevs_operational": 3, 00:10:47.155 "base_bdevs_list": [ 00:10:47.155 { 00:10:47.155 "name": "NewBaseBdev", 00:10:47.155 "uuid": "b7c09b41-9f0e-4409-8d87-753899c3d460", 00:10:47.155 "is_configured": true, 00:10:47.155 "data_offset": 0, 00:10:47.155 "data_size": 65536 00:10:47.155 }, 00:10:47.155 { 00:10:47.155 "name": "BaseBdev2", 00:10:47.155 "uuid": "49f2e0a1-cdcb-4b8d-bbc2-f218fc756093", 00:10:47.155 "is_configured": true, 00:10:47.155 "data_offset": 0, 00:10:47.155 "data_size": 65536 00:10:47.155 }, 00:10:47.155 { 00:10:47.155 "name": "BaseBdev3", 00:10:47.155 "uuid": 
"fe8a057b-b4ee-4a0b-8cb4-ae6a4411017e", 00:10:47.155 "is_configured": true, 00:10:47.155 "data_offset": 0, 00:10:47.155 "data_size": 65536 00:10:47.155 } 00:10:47.155 ] 00:10:47.155 } 00:10:47.155 } 00:10:47.155 }' 00:10:47.155 11:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.155 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:47.155 BaseBdev2 00:10:47.155 BaseBdev3' 00:10:47.155 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.155 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:47.155 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.155 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:47.155 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.155 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.155 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.155 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:47.414 [2024-11-15 11:22:30.232870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.414 [2024-11-15 11:22:30.232911] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.414 [2024-11-15 11:22:30.232999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.414 [2024-11-15 11:22:30.233459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.414 [2024-11-15 11:22:30.233477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67315 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67315 ']' 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67315 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67315 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67315' 00:10:47.414 killing process with pid 67315 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67315 00:10:47.414 
[2024-11-15 11:22:30.272660] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:47.414 11:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67315 00:10:47.672 [2024-11-15 11:22:30.508460] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.611 11:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:48.611 00:10:48.611 real 0m12.045s 00:10:48.611 user 0m19.896s 00:10:48.611 sys 0m1.766s 00:10:48.611 11:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:48.611 11:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.611 ************************************ 00:10:48.611 END TEST raid_state_function_test 00:10:48.611 ************************************ 00:10:48.870 11:22:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:48.870 11:22:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:48.870 11:22:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:48.870 11:22:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.870 ************************************ 00:10:48.870 START TEST raid_state_function_test_sb 00:10:48.870 ************************************ 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:48.870 11:22:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:48.870 
11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:48.870 Process raid pid: 67951 00:10:48.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67951 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67951' 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67951 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 67951 ']' 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:48.870 11:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.870 [2024-11-15 11:22:31.741415] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:10:48.870 [2024-11-15 11:22:31.741844] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.129 [2024-11-15 11:22:31.929157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.129 [2024-11-15 11:22:32.069643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.388 [2024-11-15 11:22:32.292331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.388 [2024-11-15 11:22:32.292396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.954 11:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:49.954 11:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:49.954 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:49.954 11:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.954 11:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.954 [2024-11-15 11:22:32.670951] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.954 [2024-11-15 11:22:32.671236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.954 [2024-11-15 11:22:32.671268] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.954 [2024-11-15 11:22:32.671288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.954 [2024-11-15 11:22:32.671300] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:49.954 [2024-11-15 11:22:32.671315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.955 "name": "Existed_Raid", 00:10:49.955 "uuid": "5a87d8fb-d93c-4570-a904-e2e925302509", 00:10:49.955 "strip_size_kb": 0, 00:10:49.955 "state": "configuring", 00:10:49.955 "raid_level": "raid1", 00:10:49.955 "superblock": true, 00:10:49.955 "num_base_bdevs": 3, 00:10:49.955 "num_base_bdevs_discovered": 0, 00:10:49.955 "num_base_bdevs_operational": 3, 00:10:49.955 "base_bdevs_list": [ 00:10:49.955 { 00:10:49.955 "name": "BaseBdev1", 00:10:49.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.955 "is_configured": false, 00:10:49.955 "data_offset": 0, 00:10:49.955 "data_size": 0 00:10:49.955 }, 00:10:49.955 { 00:10:49.955 "name": "BaseBdev2", 00:10:49.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.955 "is_configured": false, 00:10:49.955 "data_offset": 0, 00:10:49.955 "data_size": 0 00:10:49.955 }, 00:10:49.955 { 00:10:49.955 "name": "BaseBdev3", 00:10:49.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.955 "is_configured": false, 00:10:49.955 "data_offset": 0, 00:10:49.955 "data_size": 0 00:10:49.955 } 00:10:49.955 ] 00:10:49.955 }' 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.955 11:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.523 [2024-11-15 11:22:33.195050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.523 [2024-11-15 11:22:33.195275] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.523 [2024-11-15 11:22:33.203031] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:50.523 [2024-11-15 11:22:33.203269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:50.523 [2024-11-15 11:22:33.203305] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:50.523 [2024-11-15 11:22:33.203325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:50.523 [2024-11-15 11:22:33.203336] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:50.523 [2024-11-15 11:22:33.203352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.523 [2024-11-15 11:22:33.253389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.523 BaseBdev1 
00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.523 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.523 [ 00:10:50.523 { 00:10:50.523 "name": "BaseBdev1", 00:10:50.523 "aliases": [ 00:10:50.523 "b4ec947a-d31e-4f9c-adcd-8c57fcea7986" 00:10:50.523 ], 00:10:50.523 "product_name": "Malloc disk", 00:10:50.523 "block_size": 512, 00:10:50.523 "num_blocks": 65536, 00:10:50.523 "uuid": "b4ec947a-d31e-4f9c-adcd-8c57fcea7986", 00:10:50.523 "assigned_rate_limits": { 00:10:50.523 
"rw_ios_per_sec": 0, 00:10:50.523 "rw_mbytes_per_sec": 0, 00:10:50.523 "r_mbytes_per_sec": 0, 00:10:50.523 "w_mbytes_per_sec": 0 00:10:50.523 }, 00:10:50.523 "claimed": true, 00:10:50.523 "claim_type": "exclusive_write", 00:10:50.523 "zoned": false, 00:10:50.523 "supported_io_types": { 00:10:50.523 "read": true, 00:10:50.523 "write": true, 00:10:50.523 "unmap": true, 00:10:50.523 "flush": true, 00:10:50.523 "reset": true, 00:10:50.523 "nvme_admin": false, 00:10:50.524 "nvme_io": false, 00:10:50.524 "nvme_io_md": false, 00:10:50.524 "write_zeroes": true, 00:10:50.524 "zcopy": true, 00:10:50.524 "get_zone_info": false, 00:10:50.524 "zone_management": false, 00:10:50.524 "zone_append": false, 00:10:50.524 "compare": false, 00:10:50.524 "compare_and_write": false, 00:10:50.524 "abort": true, 00:10:50.524 "seek_hole": false, 00:10:50.524 "seek_data": false, 00:10:50.524 "copy": true, 00:10:50.524 "nvme_iov_md": false 00:10:50.524 }, 00:10:50.524 "memory_domains": [ 00:10:50.524 { 00:10:50.524 "dma_device_id": "system", 00:10:50.524 "dma_device_type": 1 00:10:50.524 }, 00:10:50.524 { 00:10:50.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.524 "dma_device_type": 2 00:10:50.524 } 00:10:50.524 ], 00:10:50.524 "driver_specific": {} 00:10:50.524 } 00:10:50.524 ] 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.524 "name": "Existed_Raid", 00:10:50.524 "uuid": "473ce265-645a-4915-a36a-809b43456bae", 00:10:50.524 "strip_size_kb": 0, 00:10:50.524 "state": "configuring", 00:10:50.524 "raid_level": "raid1", 00:10:50.524 "superblock": true, 00:10:50.524 "num_base_bdevs": 3, 00:10:50.524 "num_base_bdevs_discovered": 1, 00:10:50.524 "num_base_bdevs_operational": 3, 00:10:50.524 "base_bdevs_list": [ 00:10:50.524 { 00:10:50.524 "name": "BaseBdev1", 00:10:50.524 "uuid": "b4ec947a-d31e-4f9c-adcd-8c57fcea7986", 00:10:50.524 "is_configured": true, 00:10:50.524 "data_offset": 2048, 00:10:50.524 "data_size": 63488 
00:10:50.524 }, 00:10:50.524 { 00:10:50.524 "name": "BaseBdev2", 00:10:50.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.524 "is_configured": false, 00:10:50.524 "data_offset": 0, 00:10:50.524 "data_size": 0 00:10:50.524 }, 00:10:50.524 { 00:10:50.524 "name": "BaseBdev3", 00:10:50.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.524 "is_configured": false, 00:10:50.524 "data_offset": 0, 00:10:50.524 "data_size": 0 00:10:50.524 } 00:10:50.524 ] 00:10:50.524 }' 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.524 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.092 [2024-11-15 11:22:33.805546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:51.092 [2024-11-15 11:22:33.805611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.092 [2024-11-15 11:22:33.813651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.092 [2024-11-15 11:22:33.816479] 
bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:51.092 [2024-11-15 11:22:33.816774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:51.092 [2024-11-15 11:22:33.816903] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:51.092 [2024-11-15 11:22:33.816967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.092 "name": "Existed_Raid", 00:10:51.092 "uuid": "4cd51b3c-f95f-435f-b92d-5e2de3c44c57", 00:10:51.092 "strip_size_kb": 0, 00:10:51.092 "state": "configuring", 00:10:51.092 "raid_level": "raid1", 00:10:51.092 "superblock": true, 00:10:51.092 "num_base_bdevs": 3, 00:10:51.092 "num_base_bdevs_discovered": 1, 00:10:51.092 "num_base_bdevs_operational": 3, 00:10:51.092 "base_bdevs_list": [ 00:10:51.092 { 00:10:51.092 "name": "BaseBdev1", 00:10:51.092 "uuid": "b4ec947a-d31e-4f9c-adcd-8c57fcea7986", 00:10:51.092 "is_configured": true, 00:10:51.092 "data_offset": 2048, 00:10:51.092 "data_size": 63488 00:10:51.092 }, 00:10:51.092 { 00:10:51.092 "name": "BaseBdev2", 00:10:51.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.092 "is_configured": false, 00:10:51.092 "data_offset": 0, 00:10:51.092 "data_size": 0 00:10:51.092 }, 00:10:51.092 { 00:10:51.092 "name": "BaseBdev3", 00:10:51.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.092 "is_configured": false, 00:10:51.092 "data_offset": 0, 00:10:51.092 "data_size": 0 00:10:51.092 } 00:10:51.092 ] 00:10:51.092 }' 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.092 11:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.660 [2024-11-15 11:22:34.376017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.660 BaseBdev2 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:51.660 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.660 [ 00:10:51.660 { 00:10:51.660 "name": "BaseBdev2", 00:10:51.660 "aliases": [ 00:10:51.660 "1d7da151-805b-4ace-b882-af9f14a881dd" 00:10:51.660 ], 00:10:51.660 "product_name": "Malloc disk", 00:10:51.660 "block_size": 512, 00:10:51.660 "num_blocks": 65536, 00:10:51.660 "uuid": "1d7da151-805b-4ace-b882-af9f14a881dd", 00:10:51.660 "assigned_rate_limits": { 00:10:51.660 "rw_ios_per_sec": 0, 00:10:51.660 "rw_mbytes_per_sec": 0, 00:10:51.660 "r_mbytes_per_sec": 0, 00:10:51.660 "w_mbytes_per_sec": 0 00:10:51.660 }, 00:10:51.660 "claimed": true, 00:10:51.660 "claim_type": "exclusive_write", 00:10:51.660 "zoned": false, 00:10:51.660 "supported_io_types": { 00:10:51.661 "read": true, 00:10:51.661 "write": true, 00:10:51.661 "unmap": true, 00:10:51.661 "flush": true, 00:10:51.661 "reset": true, 00:10:51.661 "nvme_admin": false, 00:10:51.661 "nvme_io": false, 00:10:51.661 "nvme_io_md": false, 00:10:51.661 "write_zeroes": true, 00:10:51.661 "zcopy": true, 00:10:51.661 "get_zone_info": false, 00:10:51.661 "zone_management": false, 00:10:51.661 "zone_append": false, 00:10:51.661 "compare": false, 00:10:51.661 "compare_and_write": false, 00:10:51.661 "abort": true, 00:10:51.661 "seek_hole": false, 00:10:51.661 "seek_data": false, 00:10:51.661 "copy": true, 00:10:51.661 "nvme_iov_md": false 00:10:51.661 }, 00:10:51.661 "memory_domains": [ 00:10:51.661 { 00:10:51.661 "dma_device_id": "system", 00:10:51.661 "dma_device_type": 1 00:10:51.661 }, 00:10:51.661 { 00:10:51.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.661 "dma_device_type": 2 00:10:51.661 } 00:10:51.661 ], 00:10:51.661 "driver_specific": {} 00:10:51.661 } 00:10:51.661 ] 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.661 
11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.661 "name": "Existed_Raid", 00:10:51.661 "uuid": "4cd51b3c-f95f-435f-b92d-5e2de3c44c57", 00:10:51.661 "strip_size_kb": 0, 00:10:51.661 "state": "configuring", 00:10:51.661 "raid_level": "raid1", 00:10:51.661 "superblock": true, 00:10:51.661 "num_base_bdevs": 3, 00:10:51.661 "num_base_bdevs_discovered": 2, 00:10:51.661 "num_base_bdevs_operational": 3, 00:10:51.661 "base_bdevs_list": [ 00:10:51.661 { 00:10:51.661 "name": "BaseBdev1", 00:10:51.661 "uuid": "b4ec947a-d31e-4f9c-adcd-8c57fcea7986", 00:10:51.661 "is_configured": true, 00:10:51.661 "data_offset": 2048, 00:10:51.661 "data_size": 63488 00:10:51.661 }, 00:10:51.661 { 00:10:51.661 "name": "BaseBdev2", 00:10:51.661 "uuid": "1d7da151-805b-4ace-b882-af9f14a881dd", 00:10:51.661 "is_configured": true, 00:10:51.661 "data_offset": 2048, 00:10:51.661 "data_size": 63488 00:10:51.661 }, 00:10:51.661 { 00:10:51.661 "name": "BaseBdev3", 00:10:51.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.661 "is_configured": false, 00:10:51.661 "data_offset": 0, 00:10:51.661 "data_size": 0 00:10:51.661 } 00:10:51.661 ] 00:10:51.661 }' 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.661 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.229 [2024-11-15 11:22:34.986644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.229 [2024-11-15 11:22:34.986970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:52.229 [2024-11-15 11:22:34.987001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:52.229 [2024-11-15 11:22:34.987451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:52.229 BaseBdev3 00:10:52.229 [2024-11-15 11:22:34.987719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:52.229 [2024-11-15 11:22:34.987737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:52.229 [2024-11-15 11:22:34.987920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.229 11:22:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.229 11:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.229 [ 00:10:52.229 { 00:10:52.229 "name": "BaseBdev3", 00:10:52.229 "aliases": [ 00:10:52.229 "c9c6c2dd-2c82-4c0d-aad4-6583b2c1f033" 00:10:52.229 ], 00:10:52.229 "product_name": "Malloc disk", 00:10:52.229 "block_size": 512, 00:10:52.229 "num_blocks": 65536, 00:10:52.229 "uuid": "c9c6c2dd-2c82-4c0d-aad4-6583b2c1f033", 00:10:52.229 "assigned_rate_limits": { 00:10:52.229 "rw_ios_per_sec": 0, 00:10:52.229 "rw_mbytes_per_sec": 0, 00:10:52.229 "r_mbytes_per_sec": 0, 00:10:52.229 "w_mbytes_per_sec": 0 00:10:52.229 }, 00:10:52.229 "claimed": true, 00:10:52.229 "claim_type": "exclusive_write", 00:10:52.229 "zoned": false, 00:10:52.229 "supported_io_types": { 00:10:52.230 "read": true, 00:10:52.230 "write": true, 00:10:52.230 "unmap": true, 00:10:52.230 "flush": true, 00:10:52.230 "reset": true, 00:10:52.230 "nvme_admin": false, 00:10:52.230 "nvme_io": false, 00:10:52.230 "nvme_io_md": false, 00:10:52.230 "write_zeroes": true, 00:10:52.230 "zcopy": true, 00:10:52.230 "get_zone_info": false, 00:10:52.230 "zone_management": false, 00:10:52.230 "zone_append": false, 00:10:52.230 "compare": false, 00:10:52.230 "compare_and_write": false, 00:10:52.230 "abort": true, 00:10:52.230 "seek_hole": false, 00:10:52.230 "seek_data": false, 00:10:52.230 "copy": true, 00:10:52.230 "nvme_iov_md": false 00:10:52.230 }, 00:10:52.230 "memory_domains": [ 00:10:52.230 { 00:10:52.230 "dma_device_id": "system", 00:10:52.230 "dma_device_type": 1 00:10:52.230 }, 00:10:52.230 { 00:10:52.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.230 "dma_device_type": 2 00:10:52.230 } 00:10:52.230 ], 00:10:52.230 "driver_specific": {} 00:10:52.230 } 00:10:52.230 ] 
00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.230 
11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.230 "name": "Existed_Raid", 00:10:52.230 "uuid": "4cd51b3c-f95f-435f-b92d-5e2de3c44c57", 00:10:52.230 "strip_size_kb": 0, 00:10:52.230 "state": "online", 00:10:52.230 "raid_level": "raid1", 00:10:52.230 "superblock": true, 00:10:52.230 "num_base_bdevs": 3, 00:10:52.230 "num_base_bdevs_discovered": 3, 00:10:52.230 "num_base_bdevs_operational": 3, 00:10:52.230 "base_bdevs_list": [ 00:10:52.230 { 00:10:52.230 "name": "BaseBdev1", 00:10:52.230 "uuid": "b4ec947a-d31e-4f9c-adcd-8c57fcea7986", 00:10:52.230 "is_configured": true, 00:10:52.230 "data_offset": 2048, 00:10:52.230 "data_size": 63488 00:10:52.230 }, 00:10:52.230 { 00:10:52.230 "name": "BaseBdev2", 00:10:52.230 "uuid": "1d7da151-805b-4ace-b882-af9f14a881dd", 00:10:52.230 "is_configured": true, 00:10:52.230 "data_offset": 2048, 00:10:52.230 "data_size": 63488 00:10:52.230 }, 00:10:52.230 { 00:10:52.230 "name": "BaseBdev3", 00:10:52.230 "uuid": "c9c6c2dd-2c82-4c0d-aad4-6583b2c1f033", 00:10:52.230 "is_configured": true, 00:10:52.230 "data_offset": 2048, 00:10:52.230 "data_size": 63488 00:10:52.230 } 00:10:52.230 ] 00:10:52.230 }' 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.230 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.798 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:52.798 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:52.798 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:52.798 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.798 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.798 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.798 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:52.798 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.798 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.798 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.798 [2024-11-15 11:22:35.559337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.798 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.798 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.798 "name": "Existed_Raid", 00:10:52.798 "aliases": [ 00:10:52.798 "4cd51b3c-f95f-435f-b92d-5e2de3c44c57" 00:10:52.798 ], 00:10:52.798 "product_name": "Raid Volume", 00:10:52.798 "block_size": 512, 00:10:52.798 "num_blocks": 63488, 00:10:52.798 "uuid": "4cd51b3c-f95f-435f-b92d-5e2de3c44c57", 00:10:52.798 "assigned_rate_limits": { 00:10:52.798 "rw_ios_per_sec": 0, 00:10:52.798 "rw_mbytes_per_sec": 0, 00:10:52.798 "r_mbytes_per_sec": 0, 00:10:52.798 "w_mbytes_per_sec": 0 00:10:52.798 }, 00:10:52.798 "claimed": false, 00:10:52.798 "zoned": false, 00:10:52.798 "supported_io_types": { 00:10:52.798 "read": true, 00:10:52.798 "write": true, 00:10:52.798 "unmap": false, 00:10:52.798 "flush": false, 00:10:52.798 "reset": true, 00:10:52.798 "nvme_admin": false, 00:10:52.798 "nvme_io": false, 00:10:52.798 "nvme_io_md": false, 00:10:52.798 "write_zeroes": true, 
00:10:52.798 "zcopy": false, 00:10:52.799 "get_zone_info": false, 00:10:52.799 "zone_management": false, 00:10:52.799 "zone_append": false, 00:10:52.799 "compare": false, 00:10:52.799 "compare_and_write": false, 00:10:52.799 "abort": false, 00:10:52.799 "seek_hole": false, 00:10:52.799 "seek_data": false, 00:10:52.799 "copy": false, 00:10:52.799 "nvme_iov_md": false 00:10:52.799 }, 00:10:52.799 "memory_domains": [ 00:10:52.799 { 00:10:52.799 "dma_device_id": "system", 00:10:52.799 "dma_device_type": 1 00:10:52.799 }, 00:10:52.799 { 00:10:52.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.799 "dma_device_type": 2 00:10:52.799 }, 00:10:52.799 { 00:10:52.799 "dma_device_id": "system", 00:10:52.799 "dma_device_type": 1 00:10:52.799 }, 00:10:52.799 { 00:10:52.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.799 "dma_device_type": 2 00:10:52.799 }, 00:10:52.799 { 00:10:52.799 "dma_device_id": "system", 00:10:52.799 "dma_device_type": 1 00:10:52.799 }, 00:10:52.799 { 00:10:52.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.799 "dma_device_type": 2 00:10:52.799 } 00:10:52.799 ], 00:10:52.799 "driver_specific": { 00:10:52.799 "raid": { 00:10:52.799 "uuid": "4cd51b3c-f95f-435f-b92d-5e2de3c44c57", 00:10:52.799 "strip_size_kb": 0, 00:10:52.799 "state": "online", 00:10:52.799 "raid_level": "raid1", 00:10:52.799 "superblock": true, 00:10:52.799 "num_base_bdevs": 3, 00:10:52.799 "num_base_bdevs_discovered": 3, 00:10:52.799 "num_base_bdevs_operational": 3, 00:10:52.799 "base_bdevs_list": [ 00:10:52.799 { 00:10:52.799 "name": "BaseBdev1", 00:10:52.799 "uuid": "b4ec947a-d31e-4f9c-adcd-8c57fcea7986", 00:10:52.799 "is_configured": true, 00:10:52.799 "data_offset": 2048, 00:10:52.799 "data_size": 63488 00:10:52.799 }, 00:10:52.799 { 00:10:52.799 "name": "BaseBdev2", 00:10:52.799 "uuid": "1d7da151-805b-4ace-b882-af9f14a881dd", 00:10:52.799 "is_configured": true, 00:10:52.799 "data_offset": 2048, 00:10:52.799 "data_size": 63488 00:10:52.799 }, 00:10:52.799 { 
00:10:52.799 "name": "BaseBdev3", 00:10:52.799 "uuid": "c9c6c2dd-2c82-4c0d-aad4-6583b2c1f033", 00:10:52.799 "is_configured": true, 00:10:52.799 "data_offset": 2048, 00:10:52.799 "data_size": 63488 00:10:52.799 } 00:10:52.799 ] 00:10:52.799 } 00:10:52.799 } 00:10:52.799 }' 00:10:52.799 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.799 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:52.799 BaseBdev2 00:10:52.799 BaseBdev3' 00:10:52.799 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.799 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.799 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.799 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:52.799 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.799 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.799 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.799 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.059 11:22:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.059 [2024-11-15 11:22:35.866981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.059 
11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.059 11:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.059 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.059 "name": "Existed_Raid", 00:10:53.059 "uuid": "4cd51b3c-f95f-435f-b92d-5e2de3c44c57", 00:10:53.059 "strip_size_kb": 0, 00:10:53.059 "state": "online", 00:10:53.059 "raid_level": "raid1", 00:10:53.059 "superblock": true, 00:10:53.059 "num_base_bdevs": 3, 00:10:53.059 "num_base_bdevs_discovered": 2, 00:10:53.059 "num_base_bdevs_operational": 2, 00:10:53.059 "base_bdevs_list": [ 00:10:53.059 { 00:10:53.059 "name": null, 00:10:53.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.059 "is_configured": false, 00:10:53.059 "data_offset": 0, 00:10:53.059 "data_size": 63488 00:10:53.059 }, 00:10:53.059 { 00:10:53.059 "name": "BaseBdev2", 00:10:53.059 "uuid": "1d7da151-805b-4ace-b882-af9f14a881dd", 00:10:53.059 "is_configured": true, 00:10:53.059 "data_offset": 2048, 00:10:53.059 "data_size": 63488 00:10:53.059 }, 00:10:53.059 { 00:10:53.059 "name": "BaseBdev3", 00:10:53.059 "uuid": "c9c6c2dd-2c82-4c0d-aad4-6583b2c1f033", 00:10:53.059 "is_configured": true, 00:10:53.059 "data_offset": 2048, 00:10:53.059 "data_size": 63488 00:10:53.059 } 00:10:53.059 ] 00:10:53.059 }' 00:10:53.059 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.059 
11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.627 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:53.627 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.627 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.627 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:53.627 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.627 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.627 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.627 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.627 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:53.627 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:53.627 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.627 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.627 [2024-11-15 11:22:36.525744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.886 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.886 [2024-11-15 11:22:36.663356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:53.886 [2024-11-15 11:22:36.663502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.886 [2024-11-15 11:22:36.744910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.886 [2024-11-15 11:22:36.744983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.887 [2024-11-15 11:22:36.745004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.887 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.147 BaseBdev2 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.147 [ 00:10:54.147 { 00:10:54.147 "name": "BaseBdev2", 00:10:54.147 "aliases": [ 00:10:54.147 "7b25fdfb-7d06-4a39-b327-1c5fd8fedf24" 00:10:54.147 ], 00:10:54.147 "product_name": "Malloc disk", 00:10:54.147 "block_size": 512, 00:10:54.147 "num_blocks": 65536, 00:10:54.147 "uuid": "7b25fdfb-7d06-4a39-b327-1c5fd8fedf24", 00:10:54.147 "assigned_rate_limits": { 00:10:54.147 "rw_ios_per_sec": 0, 00:10:54.147 "rw_mbytes_per_sec": 0, 00:10:54.147 "r_mbytes_per_sec": 0, 00:10:54.147 "w_mbytes_per_sec": 0 00:10:54.147 }, 00:10:54.147 "claimed": false, 00:10:54.147 "zoned": false, 00:10:54.147 "supported_io_types": { 00:10:54.147 "read": true, 00:10:54.147 "write": true, 00:10:54.147 "unmap": true, 00:10:54.147 "flush": true, 00:10:54.147 "reset": true, 00:10:54.147 "nvme_admin": false, 00:10:54.147 "nvme_io": false, 00:10:54.147 
"nvme_io_md": false, 00:10:54.147 "write_zeroes": true, 00:10:54.147 "zcopy": true, 00:10:54.147 "get_zone_info": false, 00:10:54.147 "zone_management": false, 00:10:54.147 "zone_append": false, 00:10:54.147 "compare": false, 00:10:54.147 "compare_and_write": false, 00:10:54.147 "abort": true, 00:10:54.147 "seek_hole": false, 00:10:54.147 "seek_data": false, 00:10:54.147 "copy": true, 00:10:54.147 "nvme_iov_md": false 00:10:54.147 }, 00:10:54.147 "memory_domains": [ 00:10:54.147 { 00:10:54.147 "dma_device_id": "system", 00:10:54.147 "dma_device_type": 1 00:10:54.147 }, 00:10:54.147 { 00:10:54.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.147 "dma_device_type": 2 00:10:54.147 } 00:10:54.147 ], 00:10:54.147 "driver_specific": {} 00:10:54.147 } 00:10:54.147 ] 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.147 BaseBdev3 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.147 [ 00:10:54.147 { 00:10:54.147 "name": "BaseBdev3", 00:10:54.147 "aliases": [ 00:10:54.147 "417a32e1-fb21-4eb1-8112-3199caf4d33c" 00:10:54.147 ], 00:10:54.147 "product_name": "Malloc disk", 00:10:54.147 "block_size": 512, 00:10:54.147 "num_blocks": 65536, 00:10:54.147 "uuid": "417a32e1-fb21-4eb1-8112-3199caf4d33c", 00:10:54.147 "assigned_rate_limits": { 00:10:54.147 "rw_ios_per_sec": 0, 00:10:54.147 "rw_mbytes_per_sec": 0, 00:10:54.147 "r_mbytes_per_sec": 0, 00:10:54.147 "w_mbytes_per_sec": 0 00:10:54.147 }, 00:10:54.147 "claimed": false, 00:10:54.147 "zoned": false, 00:10:54.147 "supported_io_types": { 00:10:54.147 "read": true, 00:10:54.147 "write": true, 00:10:54.147 "unmap": true, 00:10:54.147 "flush": true, 00:10:54.147 "reset": true, 00:10:54.147 "nvme_admin": false, 
00:10:54.147 "nvme_io": false, 00:10:54.147 "nvme_io_md": false, 00:10:54.147 "write_zeroes": true, 00:10:54.147 "zcopy": true, 00:10:54.147 "get_zone_info": false, 00:10:54.147 "zone_management": false, 00:10:54.147 "zone_append": false, 00:10:54.147 "compare": false, 00:10:54.147 "compare_and_write": false, 00:10:54.147 "abort": true, 00:10:54.147 "seek_hole": false, 00:10:54.147 "seek_data": false, 00:10:54.147 "copy": true, 00:10:54.147 "nvme_iov_md": false 00:10:54.147 }, 00:10:54.147 "memory_domains": [ 00:10:54.147 { 00:10:54.147 "dma_device_id": "system", 00:10:54.147 "dma_device_type": 1 00:10:54.147 }, 00:10:54.147 { 00:10:54.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.147 "dma_device_type": 2 00:10:54.147 } 00:10:54.147 ], 00:10:54.147 "driver_specific": {} 00:10:54.147 } 00:10:54.147 ] 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.147 [2024-11-15 11:22:36.963422] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.147 [2024-11-15 11:22:36.963626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.147 [2024-11-15 11:22:36.963793] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.147 [2024-11-15 11:22:36.966404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:54.147 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.148 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.148 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.148 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.148 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.148 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.148 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.148 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.148 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.148 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.148 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.148 11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.148 11:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.148 
11:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.148 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.148 "name": "Existed_Raid", 00:10:54.148 "uuid": "78217aaa-297d-4f7b-ad2c-4f4f57d01e69", 00:10:54.148 "strip_size_kb": 0, 00:10:54.148 "state": "configuring", 00:10:54.148 "raid_level": "raid1", 00:10:54.148 "superblock": true, 00:10:54.148 "num_base_bdevs": 3, 00:10:54.148 "num_base_bdevs_discovered": 2, 00:10:54.148 "num_base_bdevs_operational": 3, 00:10:54.148 "base_bdevs_list": [ 00:10:54.148 { 00:10:54.148 "name": "BaseBdev1", 00:10:54.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.148 "is_configured": false, 00:10:54.148 "data_offset": 0, 00:10:54.148 "data_size": 0 00:10:54.148 }, 00:10:54.148 { 00:10:54.148 "name": "BaseBdev2", 00:10:54.148 "uuid": "7b25fdfb-7d06-4a39-b327-1c5fd8fedf24", 00:10:54.148 "is_configured": true, 00:10:54.148 "data_offset": 2048, 00:10:54.148 "data_size": 63488 00:10:54.148 }, 00:10:54.148 { 00:10:54.148 "name": "BaseBdev3", 00:10:54.148 "uuid": "417a32e1-fb21-4eb1-8112-3199caf4d33c", 00:10:54.148 "is_configured": true, 00:10:54.148 "data_offset": 2048, 00:10:54.148 "data_size": 63488 00:10:54.148 } 00:10:54.148 ] 00:10:54.148 }' 00:10:54.148 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.148 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.715 [2024-11-15 11:22:37.495637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:54.715 11:22:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.715 "name": 
"Existed_Raid", 00:10:54.715 "uuid": "78217aaa-297d-4f7b-ad2c-4f4f57d01e69", 00:10:54.715 "strip_size_kb": 0, 00:10:54.715 "state": "configuring", 00:10:54.715 "raid_level": "raid1", 00:10:54.715 "superblock": true, 00:10:54.715 "num_base_bdevs": 3, 00:10:54.715 "num_base_bdevs_discovered": 1, 00:10:54.715 "num_base_bdevs_operational": 3, 00:10:54.715 "base_bdevs_list": [ 00:10:54.715 { 00:10:54.715 "name": "BaseBdev1", 00:10:54.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.715 "is_configured": false, 00:10:54.715 "data_offset": 0, 00:10:54.715 "data_size": 0 00:10:54.715 }, 00:10:54.715 { 00:10:54.715 "name": null, 00:10:54.715 "uuid": "7b25fdfb-7d06-4a39-b327-1c5fd8fedf24", 00:10:54.715 "is_configured": false, 00:10:54.715 "data_offset": 0, 00:10:54.715 "data_size": 63488 00:10:54.715 }, 00:10:54.715 { 00:10:54.715 "name": "BaseBdev3", 00:10:54.715 "uuid": "417a32e1-fb21-4eb1-8112-3199caf4d33c", 00:10:54.715 "is_configured": true, 00:10:54.715 "data_offset": 2048, 00:10:54.715 "data_size": 63488 00:10:54.715 } 00:10:54.715 ] 00:10:54.715 }' 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.715 11:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:55.284 
11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.284 [2024-11-15 11:22:38.123280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.284 BaseBdev1 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.284 [ 00:10:55.284 { 00:10:55.284 "name": "BaseBdev1", 00:10:55.284 "aliases": [ 00:10:55.284 "23c3bb65-3b25-4fad-a994-f716b60cafcc" 00:10:55.284 ], 00:10:55.284 "product_name": "Malloc disk", 00:10:55.284 "block_size": 512, 00:10:55.284 "num_blocks": 65536, 00:10:55.284 "uuid": "23c3bb65-3b25-4fad-a994-f716b60cafcc", 00:10:55.284 "assigned_rate_limits": { 00:10:55.284 "rw_ios_per_sec": 0, 00:10:55.284 "rw_mbytes_per_sec": 0, 00:10:55.284 "r_mbytes_per_sec": 0, 00:10:55.284 "w_mbytes_per_sec": 0 00:10:55.284 }, 00:10:55.284 "claimed": true, 00:10:55.284 "claim_type": "exclusive_write", 00:10:55.284 "zoned": false, 00:10:55.284 "supported_io_types": { 00:10:55.284 "read": true, 00:10:55.284 "write": true, 00:10:55.284 "unmap": true, 00:10:55.284 "flush": true, 00:10:55.284 "reset": true, 00:10:55.284 "nvme_admin": false, 00:10:55.284 "nvme_io": false, 00:10:55.284 "nvme_io_md": false, 00:10:55.284 "write_zeroes": true, 00:10:55.284 "zcopy": true, 00:10:55.284 "get_zone_info": false, 00:10:55.284 "zone_management": false, 00:10:55.284 "zone_append": false, 00:10:55.284 "compare": false, 00:10:55.284 "compare_and_write": false, 00:10:55.284 "abort": true, 00:10:55.284 "seek_hole": false, 00:10:55.284 "seek_data": false, 00:10:55.284 "copy": true, 00:10:55.284 "nvme_iov_md": false 00:10:55.284 }, 00:10:55.284 "memory_domains": [ 00:10:55.284 { 00:10:55.284 "dma_device_id": "system", 00:10:55.284 "dma_device_type": 1 00:10:55.284 }, 00:10:55.284 { 00:10:55.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.284 "dma_device_type": 2 00:10:55.284 } 00:10:55.284 ], 00:10:55.284 "driver_specific": {} 00:10:55.284 } 00:10:55.284 ] 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:55.284 
11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.284 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.284 "name": "Existed_Raid", 00:10:55.284 "uuid": "78217aaa-297d-4f7b-ad2c-4f4f57d01e69", 00:10:55.285 "strip_size_kb": 0, 
00:10:55.285 "state": "configuring", 00:10:55.285 "raid_level": "raid1", 00:10:55.285 "superblock": true, 00:10:55.285 "num_base_bdevs": 3, 00:10:55.285 "num_base_bdevs_discovered": 2, 00:10:55.285 "num_base_bdevs_operational": 3, 00:10:55.285 "base_bdevs_list": [ 00:10:55.285 { 00:10:55.285 "name": "BaseBdev1", 00:10:55.285 "uuid": "23c3bb65-3b25-4fad-a994-f716b60cafcc", 00:10:55.285 "is_configured": true, 00:10:55.285 "data_offset": 2048, 00:10:55.285 "data_size": 63488 00:10:55.285 }, 00:10:55.285 { 00:10:55.285 "name": null, 00:10:55.285 "uuid": "7b25fdfb-7d06-4a39-b327-1c5fd8fedf24", 00:10:55.285 "is_configured": false, 00:10:55.285 "data_offset": 0, 00:10:55.285 "data_size": 63488 00:10:55.285 }, 00:10:55.285 { 00:10:55.285 "name": "BaseBdev3", 00:10:55.285 "uuid": "417a32e1-fb21-4eb1-8112-3199caf4d33c", 00:10:55.285 "is_configured": true, 00:10:55.285 "data_offset": 2048, 00:10:55.285 "data_size": 63488 00:10:55.285 } 00:10:55.285 ] 00:10:55.285 }' 00:10:55.285 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.285 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.852 [2024-11-15 11:22:38.739516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.852 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.111 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.111 "name": "Existed_Raid", 00:10:56.111 "uuid": "78217aaa-297d-4f7b-ad2c-4f4f57d01e69", 00:10:56.111 "strip_size_kb": 0, 00:10:56.111 "state": "configuring", 00:10:56.111 "raid_level": "raid1", 00:10:56.111 "superblock": true, 00:10:56.111 "num_base_bdevs": 3, 00:10:56.111 "num_base_bdevs_discovered": 1, 00:10:56.111 "num_base_bdevs_operational": 3, 00:10:56.111 "base_bdevs_list": [ 00:10:56.111 { 00:10:56.111 "name": "BaseBdev1", 00:10:56.111 "uuid": "23c3bb65-3b25-4fad-a994-f716b60cafcc", 00:10:56.111 "is_configured": true, 00:10:56.111 "data_offset": 2048, 00:10:56.111 "data_size": 63488 00:10:56.111 }, 00:10:56.111 { 00:10:56.111 "name": null, 00:10:56.111 "uuid": "7b25fdfb-7d06-4a39-b327-1c5fd8fedf24", 00:10:56.111 "is_configured": false, 00:10:56.111 "data_offset": 0, 00:10:56.111 "data_size": 63488 00:10:56.111 }, 00:10:56.111 { 00:10:56.111 "name": null, 00:10:56.111 "uuid": "417a32e1-fb21-4eb1-8112-3199caf4d33c", 00:10:56.111 "is_configured": false, 00:10:56.111 "data_offset": 0, 00:10:56.111 "data_size": 63488 00:10:56.111 } 00:10:56.111 ] 00:10:56.111 }' 00:10:56.111 11:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.111 11:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.370 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.370 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.370 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.370 11:22:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:56.370 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.370 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:56.370 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:56.370 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.370 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.370 [2024-11-15 11:22:39.311819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.629 "name": "Existed_Raid", 00:10:56.629 "uuid": "78217aaa-297d-4f7b-ad2c-4f4f57d01e69", 00:10:56.629 "strip_size_kb": 0, 00:10:56.629 "state": "configuring", 00:10:56.629 "raid_level": "raid1", 00:10:56.629 "superblock": true, 00:10:56.629 "num_base_bdevs": 3, 00:10:56.629 "num_base_bdevs_discovered": 2, 00:10:56.629 "num_base_bdevs_operational": 3, 00:10:56.629 "base_bdevs_list": [ 00:10:56.629 { 00:10:56.629 "name": "BaseBdev1", 00:10:56.629 "uuid": "23c3bb65-3b25-4fad-a994-f716b60cafcc", 00:10:56.629 "is_configured": true, 00:10:56.629 "data_offset": 2048, 00:10:56.629 "data_size": 63488 00:10:56.629 }, 00:10:56.629 { 00:10:56.629 "name": null, 00:10:56.629 "uuid": "7b25fdfb-7d06-4a39-b327-1c5fd8fedf24", 00:10:56.629 "is_configured": false, 00:10:56.629 "data_offset": 0, 00:10:56.629 "data_size": 63488 00:10:56.629 }, 00:10:56.629 { 00:10:56.629 "name": "BaseBdev3", 00:10:56.629 "uuid": "417a32e1-fb21-4eb1-8112-3199caf4d33c", 00:10:56.629 "is_configured": true, 00:10:56.629 "data_offset": 2048, 00:10:56.629 "data_size": 63488 00:10:56.629 } 00:10:56.629 ] 00:10:56.629 }' 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.629 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.888 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.888 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.888 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:56.888 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.147 [2024-11-15 11:22:39.879963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.147 11:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.147 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.147 "name": "Existed_Raid", 00:10:57.147 "uuid": "78217aaa-297d-4f7b-ad2c-4f4f57d01e69", 00:10:57.147 "strip_size_kb": 0, 00:10:57.147 "state": "configuring", 00:10:57.147 "raid_level": "raid1", 00:10:57.147 "superblock": true, 00:10:57.147 "num_base_bdevs": 3, 00:10:57.147 "num_base_bdevs_discovered": 1, 00:10:57.147 "num_base_bdevs_operational": 3, 00:10:57.147 "base_bdevs_list": [ 00:10:57.147 { 00:10:57.147 "name": null, 00:10:57.147 "uuid": "23c3bb65-3b25-4fad-a994-f716b60cafcc", 00:10:57.147 "is_configured": false, 00:10:57.147 "data_offset": 0, 00:10:57.147 "data_size": 63488 00:10:57.147 }, 00:10:57.147 { 00:10:57.147 "name": null, 00:10:57.147 "uuid": 
"7b25fdfb-7d06-4a39-b327-1c5fd8fedf24", 00:10:57.147 "is_configured": false, 00:10:57.147 "data_offset": 0, 00:10:57.147 "data_size": 63488 00:10:57.147 }, 00:10:57.147 { 00:10:57.147 "name": "BaseBdev3", 00:10:57.147 "uuid": "417a32e1-fb21-4eb1-8112-3199caf4d33c", 00:10:57.147 "is_configured": true, 00:10:57.147 "data_offset": 2048, 00:10:57.147 "data_size": 63488 00:10:57.147 } 00:10:57.147 ] 00:10:57.147 }' 00:10:57.147 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.147 11:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.714 [2024-11-15 11:22:40.532022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.714 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.715 "name": "Existed_Raid", 00:10:57.715 "uuid": "78217aaa-297d-4f7b-ad2c-4f4f57d01e69", 00:10:57.715 "strip_size_kb": 0, 00:10:57.715 "state": "configuring", 00:10:57.715 
"raid_level": "raid1", 00:10:57.715 "superblock": true, 00:10:57.715 "num_base_bdevs": 3, 00:10:57.715 "num_base_bdevs_discovered": 2, 00:10:57.715 "num_base_bdevs_operational": 3, 00:10:57.715 "base_bdevs_list": [ 00:10:57.715 { 00:10:57.715 "name": null, 00:10:57.715 "uuid": "23c3bb65-3b25-4fad-a994-f716b60cafcc", 00:10:57.715 "is_configured": false, 00:10:57.715 "data_offset": 0, 00:10:57.715 "data_size": 63488 00:10:57.715 }, 00:10:57.715 { 00:10:57.715 "name": "BaseBdev2", 00:10:57.715 "uuid": "7b25fdfb-7d06-4a39-b327-1c5fd8fedf24", 00:10:57.715 "is_configured": true, 00:10:57.715 "data_offset": 2048, 00:10:57.715 "data_size": 63488 00:10:57.715 }, 00:10:57.715 { 00:10:57.715 "name": "BaseBdev3", 00:10:57.715 "uuid": "417a32e1-fb21-4eb1-8112-3199caf4d33c", 00:10:57.715 "is_configured": true, 00:10:57.715 "data_offset": 2048, 00:10:57.715 "data_size": 63488 00:10:57.715 } 00:10:57.715 ] 00:10:57.715 }' 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.715 11:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.282 11:22:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 23c3bb65-3b25-4fad-a994-f716b60cafcc 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.282 [2024-11-15 11:22:41.203889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:58.282 [2024-11-15 11:22:41.204426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:58.282 [2024-11-15 11:22:41.204452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:58.282 NewBaseBdev 00:10:58.282 [2024-11-15 11:22:41.204795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:58.282 [2024-11-15 11:22:41.205008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:58.282 [2024-11-15 11:22:41.205038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:58.282 [2024-11-15 11:22:41.205207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:58.282 
11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.282 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.283 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:58.283 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.283 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.541 [ 00:10:58.541 { 00:10:58.541 "name": "NewBaseBdev", 00:10:58.541 "aliases": [ 00:10:58.541 "23c3bb65-3b25-4fad-a994-f716b60cafcc" 00:10:58.541 ], 00:10:58.541 "product_name": "Malloc disk", 00:10:58.541 "block_size": 512, 00:10:58.541 "num_blocks": 65536, 00:10:58.541 "uuid": "23c3bb65-3b25-4fad-a994-f716b60cafcc", 00:10:58.541 "assigned_rate_limits": { 00:10:58.541 "rw_ios_per_sec": 0, 00:10:58.541 "rw_mbytes_per_sec": 0, 00:10:58.541 "r_mbytes_per_sec": 0, 00:10:58.541 "w_mbytes_per_sec": 0 00:10:58.541 }, 00:10:58.541 "claimed": true, 00:10:58.541 "claim_type": "exclusive_write", 00:10:58.541 
"zoned": false, 00:10:58.541 "supported_io_types": { 00:10:58.541 "read": true, 00:10:58.541 "write": true, 00:10:58.541 "unmap": true, 00:10:58.541 "flush": true, 00:10:58.541 "reset": true, 00:10:58.541 "nvme_admin": false, 00:10:58.541 "nvme_io": false, 00:10:58.541 "nvme_io_md": false, 00:10:58.541 "write_zeroes": true, 00:10:58.541 "zcopy": true, 00:10:58.541 "get_zone_info": false, 00:10:58.541 "zone_management": false, 00:10:58.541 "zone_append": false, 00:10:58.541 "compare": false, 00:10:58.541 "compare_and_write": false, 00:10:58.541 "abort": true, 00:10:58.541 "seek_hole": false, 00:10:58.541 "seek_data": false, 00:10:58.541 "copy": true, 00:10:58.541 "nvme_iov_md": false 00:10:58.541 }, 00:10:58.541 "memory_domains": [ 00:10:58.541 { 00:10:58.541 "dma_device_id": "system", 00:10:58.541 "dma_device_type": 1 00:10:58.541 }, 00:10:58.541 { 00:10:58.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.541 "dma_device_type": 2 00:10:58.541 } 00:10:58.541 ], 00:10:58.541 "driver_specific": {} 00:10:58.541 } 00:10:58.541 ] 00:10:58.541 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.541 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:58.541 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:58.541 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.541 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.541 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.542 "name": "Existed_Raid", 00:10:58.542 "uuid": "78217aaa-297d-4f7b-ad2c-4f4f57d01e69", 00:10:58.542 "strip_size_kb": 0, 00:10:58.542 "state": "online", 00:10:58.542 "raid_level": "raid1", 00:10:58.542 "superblock": true, 00:10:58.542 "num_base_bdevs": 3, 00:10:58.542 "num_base_bdevs_discovered": 3, 00:10:58.542 "num_base_bdevs_operational": 3, 00:10:58.542 "base_bdevs_list": [ 00:10:58.542 { 00:10:58.542 "name": "NewBaseBdev", 00:10:58.542 "uuid": "23c3bb65-3b25-4fad-a994-f716b60cafcc", 00:10:58.542 "is_configured": true, 00:10:58.542 "data_offset": 2048, 00:10:58.542 "data_size": 63488 00:10:58.542 }, 00:10:58.542 { 00:10:58.542 "name": "BaseBdev2", 00:10:58.542 "uuid": "7b25fdfb-7d06-4a39-b327-1c5fd8fedf24", 00:10:58.542 "is_configured": true, 00:10:58.542 "data_offset": 2048, 00:10:58.542 "data_size": 63488 00:10:58.542 }, 00:10:58.542 
{ 00:10:58.542 "name": "BaseBdev3", 00:10:58.542 "uuid": "417a32e1-fb21-4eb1-8112-3199caf4d33c", 00:10:58.542 "is_configured": true, 00:10:58.542 "data_offset": 2048, 00:10:58.542 "data_size": 63488 00:10:58.542 } 00:10:58.542 ] 00:10:58.542 }' 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.542 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.110 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:59.110 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:59.110 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.110 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.110 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.110 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.110 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:59.110 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.110 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.110 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.110 [2024-11-15 11:22:41.764618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.110 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.110 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.110 "name": "Existed_Raid", 00:10:59.110 
"aliases": [ 00:10:59.110 "78217aaa-297d-4f7b-ad2c-4f4f57d01e69" 00:10:59.110 ], 00:10:59.110 "product_name": "Raid Volume", 00:10:59.110 "block_size": 512, 00:10:59.110 "num_blocks": 63488, 00:10:59.110 "uuid": "78217aaa-297d-4f7b-ad2c-4f4f57d01e69", 00:10:59.110 "assigned_rate_limits": { 00:10:59.110 "rw_ios_per_sec": 0, 00:10:59.110 "rw_mbytes_per_sec": 0, 00:10:59.110 "r_mbytes_per_sec": 0, 00:10:59.110 "w_mbytes_per_sec": 0 00:10:59.110 }, 00:10:59.110 "claimed": false, 00:10:59.110 "zoned": false, 00:10:59.110 "supported_io_types": { 00:10:59.110 "read": true, 00:10:59.110 "write": true, 00:10:59.110 "unmap": false, 00:10:59.110 "flush": false, 00:10:59.110 "reset": true, 00:10:59.110 "nvme_admin": false, 00:10:59.110 "nvme_io": false, 00:10:59.110 "nvme_io_md": false, 00:10:59.110 "write_zeroes": true, 00:10:59.110 "zcopy": false, 00:10:59.110 "get_zone_info": false, 00:10:59.110 "zone_management": false, 00:10:59.110 "zone_append": false, 00:10:59.110 "compare": false, 00:10:59.110 "compare_and_write": false, 00:10:59.110 "abort": false, 00:10:59.110 "seek_hole": false, 00:10:59.110 "seek_data": false, 00:10:59.110 "copy": false, 00:10:59.110 "nvme_iov_md": false 00:10:59.110 }, 00:10:59.110 "memory_domains": [ 00:10:59.110 { 00:10:59.110 "dma_device_id": "system", 00:10:59.110 "dma_device_type": 1 00:10:59.110 }, 00:10:59.110 { 00:10:59.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.110 "dma_device_type": 2 00:10:59.110 }, 00:10:59.110 { 00:10:59.110 "dma_device_id": "system", 00:10:59.110 "dma_device_type": 1 00:10:59.110 }, 00:10:59.110 { 00:10:59.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.110 "dma_device_type": 2 00:10:59.110 }, 00:10:59.110 { 00:10:59.110 "dma_device_id": "system", 00:10:59.110 "dma_device_type": 1 00:10:59.110 }, 00:10:59.110 { 00:10:59.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.110 "dma_device_type": 2 00:10:59.110 } 00:10:59.110 ], 00:10:59.110 "driver_specific": { 00:10:59.110 "raid": { 00:10:59.110 
"uuid": "78217aaa-297d-4f7b-ad2c-4f4f57d01e69", 00:10:59.110 "strip_size_kb": 0, 00:10:59.110 "state": "online", 00:10:59.110 "raid_level": "raid1", 00:10:59.110 "superblock": true, 00:10:59.110 "num_base_bdevs": 3, 00:10:59.110 "num_base_bdevs_discovered": 3, 00:10:59.110 "num_base_bdevs_operational": 3, 00:10:59.110 "base_bdevs_list": [ 00:10:59.110 { 00:10:59.110 "name": "NewBaseBdev", 00:10:59.110 "uuid": "23c3bb65-3b25-4fad-a994-f716b60cafcc", 00:10:59.111 "is_configured": true, 00:10:59.111 "data_offset": 2048, 00:10:59.111 "data_size": 63488 00:10:59.111 }, 00:10:59.111 { 00:10:59.111 "name": "BaseBdev2", 00:10:59.111 "uuid": "7b25fdfb-7d06-4a39-b327-1c5fd8fedf24", 00:10:59.111 "is_configured": true, 00:10:59.111 "data_offset": 2048, 00:10:59.111 "data_size": 63488 00:10:59.111 }, 00:10:59.111 { 00:10:59.111 "name": "BaseBdev3", 00:10:59.111 "uuid": "417a32e1-fb21-4eb1-8112-3199caf4d33c", 00:10:59.111 "is_configured": true, 00:10:59.111 "data_offset": 2048, 00:10:59.111 "data_size": 63488 00:10:59.111 } 00:10:59.111 ] 00:10:59.111 } 00:10:59.111 } 00:10:59.111 }' 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:59.111 BaseBdev2 00:10:59.111 BaseBdev3' 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:59.111 11:22:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.111 11:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.111 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.111 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.111 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.111 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.111 11:22:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:59.111 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.111 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.111 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.370 [2024-11-15 11:22:42.076239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.370 [2024-11-15 11:22:42.076277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.370 [2024-11-15 11:22:42.076363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.370 [2024-11-15 11:22:42.076772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.370 [2024-11-15 11:22:42.076788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67951 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # 
'[' -z 67951 ']' 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 67951 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67951 00:10:59.370 killing process with pid 67951 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67951' 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 67951 00:10:59.370 [2024-11-15 11:22:42.118025] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.370 11:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 67951 00:10:59.629 [2024-11-15 11:22:42.391879] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.566 11:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:00.566 00:11:00.566 real 0m11.889s 00:11:00.566 user 0m19.633s 00:11:00.566 sys 0m1.672s 00:11:00.566 ************************************ 00:11:00.566 END TEST raid_state_function_test_sb 00:11:00.566 ************************************ 00:11:00.566 11:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:00.566 11:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.825 11:22:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:11:00.825 11:22:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:00.825 11:22:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:00.825 11:22:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.825 ************************************ 00:11:00.825 START TEST raid_superblock_test 00:11:00.825 ************************************ 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:00.825 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:00.826 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:11:00.826 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:00.826 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68589 00:11:00.826 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:00.826 11:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68589 00:11:00.826 11:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68589 ']' 00:11:00.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.826 11:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.826 11:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:00.826 11:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.826 11:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:00.826 11:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.826 [2024-11-15 11:22:43.693031] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:11:00.826 [2024-11-15 11:22:43.693603] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68589 ] 00:11:01.084 [2024-11-15 11:22:43.871311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.084 [2024-11-15 11:22:44.019158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.343 [2024-11-15 11:22:44.240494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.343 [2024-11-15 11:22:44.240567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:01.914 
11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.914 malloc1 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.914 [2024-11-15 11:22:44.679317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:01.914 [2024-11-15 11:22:44.679552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.914 [2024-11-15 11:22:44.679601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:01.914 [2024-11-15 11:22:44.679619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.914 [2024-11-15 11:22:44.682840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.914 [2024-11-15 11:22:44.683040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:01.914 pt1 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.914 malloc2 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.914 [2024-11-15 11:22:44.743005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:01.914 [2024-11-15 11:22:44.743250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.914 [2024-11-15 11:22:44.743303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:01.914 [2024-11-15 11:22:44.743321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.914 [2024-11-15 11:22:44.746352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.914 [2024-11-15 11:22:44.746575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:01.914 
pt2 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.914 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.915 malloc3 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.915 [2024-11-15 11:22:44.810932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:01.915 [2024-11-15 11:22:44.811010] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.915 [2024-11-15 11:22:44.811046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:01.915 [2024-11-15 11:22:44.811062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.915 [2024-11-15 11:22:44.814108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.915 [2024-11-15 11:22:44.814309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:01.915 pt3 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.915 [2024-11-15 11:22:44.823153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:01.915 [2024-11-15 11:22:44.825822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:01.915 [2024-11-15 11:22:44.825921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:01.915 [2024-11-15 11:22:44.826208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:01.915 [2024-11-15 11:22:44.826240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:01.915 [2024-11-15 11:22:44.826546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:01.915 
[2024-11-15 11:22:44.826808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:01.915 [2024-11-15 11:22:44.826829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:01.915 [2024-11-15 11:22:44.827091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:11:01.915 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.172 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.172 "name": "raid_bdev1", 00:11:02.172 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:02.172 "strip_size_kb": 0, 00:11:02.172 "state": "online", 00:11:02.172 "raid_level": "raid1", 00:11:02.172 "superblock": true, 00:11:02.172 "num_base_bdevs": 3, 00:11:02.172 "num_base_bdevs_discovered": 3, 00:11:02.172 "num_base_bdevs_operational": 3, 00:11:02.172 "base_bdevs_list": [ 00:11:02.172 { 00:11:02.172 "name": "pt1", 00:11:02.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.172 "is_configured": true, 00:11:02.172 "data_offset": 2048, 00:11:02.172 "data_size": 63488 00:11:02.172 }, 00:11:02.172 { 00:11:02.172 "name": "pt2", 00:11:02.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.173 "is_configured": true, 00:11:02.173 "data_offset": 2048, 00:11:02.173 "data_size": 63488 00:11:02.173 }, 00:11:02.173 { 00:11:02.173 "name": "pt3", 00:11:02.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.173 "is_configured": true, 00:11:02.173 "data_offset": 2048, 00:11:02.173 "data_size": 63488 00:11:02.173 } 00:11:02.173 ] 00:11:02.173 }' 00:11:02.173 11:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.173 11:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.431 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:02.431 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:02.431 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:02.431 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:02.431 11:22:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:02.431 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:02.431 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.431 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:02.431 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.431 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.431 [2024-11-15 11:22:45.355776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.431 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.689 "name": "raid_bdev1", 00:11:02.689 "aliases": [ 00:11:02.689 "d1838b13-2ba7-4bed-8f8f-d3da5905631b" 00:11:02.689 ], 00:11:02.689 "product_name": "Raid Volume", 00:11:02.689 "block_size": 512, 00:11:02.689 "num_blocks": 63488, 00:11:02.689 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:02.689 "assigned_rate_limits": { 00:11:02.689 "rw_ios_per_sec": 0, 00:11:02.689 "rw_mbytes_per_sec": 0, 00:11:02.689 "r_mbytes_per_sec": 0, 00:11:02.689 "w_mbytes_per_sec": 0 00:11:02.689 }, 00:11:02.689 "claimed": false, 00:11:02.689 "zoned": false, 00:11:02.689 "supported_io_types": { 00:11:02.689 "read": true, 00:11:02.689 "write": true, 00:11:02.689 "unmap": false, 00:11:02.689 "flush": false, 00:11:02.689 "reset": true, 00:11:02.689 "nvme_admin": false, 00:11:02.689 "nvme_io": false, 00:11:02.689 "nvme_io_md": false, 00:11:02.689 "write_zeroes": true, 00:11:02.689 "zcopy": false, 00:11:02.689 "get_zone_info": false, 00:11:02.689 "zone_management": false, 00:11:02.689 "zone_append": false, 00:11:02.689 "compare": false, 00:11:02.689 
"compare_and_write": false, 00:11:02.689 "abort": false, 00:11:02.689 "seek_hole": false, 00:11:02.689 "seek_data": false, 00:11:02.689 "copy": false, 00:11:02.689 "nvme_iov_md": false 00:11:02.689 }, 00:11:02.689 "memory_domains": [ 00:11:02.689 { 00:11:02.689 "dma_device_id": "system", 00:11:02.689 "dma_device_type": 1 00:11:02.689 }, 00:11:02.689 { 00:11:02.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.689 "dma_device_type": 2 00:11:02.689 }, 00:11:02.689 { 00:11:02.689 "dma_device_id": "system", 00:11:02.689 "dma_device_type": 1 00:11:02.689 }, 00:11:02.689 { 00:11:02.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.689 "dma_device_type": 2 00:11:02.689 }, 00:11:02.689 { 00:11:02.689 "dma_device_id": "system", 00:11:02.689 "dma_device_type": 1 00:11:02.689 }, 00:11:02.689 { 00:11:02.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.689 "dma_device_type": 2 00:11:02.689 } 00:11:02.689 ], 00:11:02.689 "driver_specific": { 00:11:02.689 "raid": { 00:11:02.689 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:02.689 "strip_size_kb": 0, 00:11:02.689 "state": "online", 00:11:02.689 "raid_level": "raid1", 00:11:02.689 "superblock": true, 00:11:02.689 "num_base_bdevs": 3, 00:11:02.689 "num_base_bdevs_discovered": 3, 00:11:02.689 "num_base_bdevs_operational": 3, 00:11:02.689 "base_bdevs_list": [ 00:11:02.689 { 00:11:02.689 "name": "pt1", 00:11:02.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.689 "is_configured": true, 00:11:02.689 "data_offset": 2048, 00:11:02.689 "data_size": 63488 00:11:02.689 }, 00:11:02.689 { 00:11:02.689 "name": "pt2", 00:11:02.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.689 "is_configured": true, 00:11:02.689 "data_offset": 2048, 00:11:02.689 "data_size": 63488 00:11:02.689 }, 00:11:02.689 { 00:11:02.689 "name": "pt3", 00:11:02.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.689 "is_configured": true, 00:11:02.689 "data_offset": 2048, 00:11:02.689 "data_size": 63488 00:11:02.689 } 
00:11:02.689 ] 00:11:02.689 } 00:11:02.689 } 00:11:02.689 }' 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:02.689 pt2 00:11:02.689 pt3' 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.689 11:22:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:02.689 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.690 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.690 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.690 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.948 [2024-11-15 11:22:45.671738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d1838b13-2ba7-4bed-8f8f-d3da5905631b 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d1838b13-2ba7-4bed-8f8f-d3da5905631b ']' 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.948 [2024-11-15 11:22:45.719430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.948 [2024-11-15 11:22:45.719463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.948 [2024-11-15 11:22:45.719603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.948 [2024-11-15 11:22:45.719698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.948 [2024-11-15 11:22:45.719714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:02.948 
11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.948 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.949 [2024-11-15 11:22:45.863545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:02.949 [2024-11-15 11:22:45.866281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:02.949 [2024-11-15 11:22:45.866365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:11:02.949 [2024-11-15 11:22:45.866441] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:02.949 [2024-11-15 11:22:45.866514] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:02.949 [2024-11-15 11:22:45.866550] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:02.949 [2024-11-15 11:22:45.866577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.949 [2024-11-15 11:22:45.866591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:02.949 request: 00:11:02.949 { 00:11:02.949 "name": "raid_bdev1", 00:11:02.949 "raid_level": "raid1", 00:11:02.949 "base_bdevs": [ 00:11:02.949 "malloc1", 00:11:02.949 "malloc2", 00:11:02.949 "malloc3" 00:11:02.949 ], 00:11:02.949 "superblock": false, 00:11:02.949 "method": "bdev_raid_create", 00:11:02.949 "req_id": 1 00:11:02.949 } 00:11:02.949 Got JSON-RPC error response 00:11:02.949 response: 00:11:02.949 { 00:11:02.949 "code": -17, 00:11:02.949 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:02.949 } 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.949 11:22:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.949 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.207 [2024-11-15 11:22:45.931495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:03.207 [2024-11-15 11:22:45.931610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.207 [2024-11-15 11:22:45.931653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:03.207 [2024-11-15 11:22:45.931666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.207 [2024-11-15 11:22:45.934914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.207 [2024-11-15 11:22:45.935124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:03.207 [2024-11-15 11:22:45.935276] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:03.207 [2024-11-15 11:22:45.935345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:03.207 pt1 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.207 "name": "raid_bdev1", 00:11:03.207 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:03.207 "strip_size_kb": 0, 00:11:03.207 "state": "configuring", 00:11:03.207 
"raid_level": "raid1", 00:11:03.207 "superblock": true, 00:11:03.207 "num_base_bdevs": 3, 00:11:03.207 "num_base_bdevs_discovered": 1, 00:11:03.207 "num_base_bdevs_operational": 3, 00:11:03.207 "base_bdevs_list": [ 00:11:03.207 { 00:11:03.207 "name": "pt1", 00:11:03.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.207 "is_configured": true, 00:11:03.207 "data_offset": 2048, 00:11:03.207 "data_size": 63488 00:11:03.207 }, 00:11:03.207 { 00:11:03.207 "name": null, 00:11:03.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.207 "is_configured": false, 00:11:03.207 "data_offset": 2048, 00:11:03.207 "data_size": 63488 00:11:03.207 }, 00:11:03.207 { 00:11:03.207 "name": null, 00:11:03.207 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.207 "is_configured": false, 00:11:03.207 "data_offset": 2048, 00:11:03.207 "data_size": 63488 00:11:03.207 } 00:11:03.207 ] 00:11:03.207 }' 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.207 11:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.774 [2024-11-15 11:22:46.455776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:03.774 [2024-11-15 11:22:46.455877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.774 [2024-11-15 11:22:46.455913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:03.774 [2024-11-15 11:22:46.455929] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.774 [2024-11-15 11:22:46.456611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.774 [2024-11-15 11:22:46.456645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:03.774 [2024-11-15 11:22:46.456768] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:03.774 [2024-11-15 11:22:46.456825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:03.774 pt2 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.774 [2024-11-15 11:22:46.463746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.774 "name": "raid_bdev1", 00:11:03.774 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:03.774 "strip_size_kb": 0, 00:11:03.774 "state": "configuring", 00:11:03.774 "raid_level": "raid1", 00:11:03.774 "superblock": true, 00:11:03.774 "num_base_bdevs": 3, 00:11:03.774 "num_base_bdevs_discovered": 1, 00:11:03.774 "num_base_bdevs_operational": 3, 00:11:03.774 "base_bdevs_list": [ 00:11:03.774 { 00:11:03.774 "name": "pt1", 00:11:03.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.774 "is_configured": true, 00:11:03.774 "data_offset": 2048, 00:11:03.774 "data_size": 63488 00:11:03.774 }, 00:11:03.774 { 00:11:03.774 "name": null, 00:11:03.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.774 "is_configured": false, 00:11:03.774 "data_offset": 0, 00:11:03.774 "data_size": 63488 00:11:03.774 }, 00:11:03.774 { 00:11:03.774 "name": null, 00:11:03.774 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.774 "is_configured": false, 00:11:03.774 "data_offset": 2048, 00:11:03.774 
"data_size": 63488 00:11:03.774 } 00:11:03.774 ] 00:11:03.774 }' 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.774 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.340 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:04.340 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.340 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.340 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.340 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.340 [2024-11-15 11:22:46.987951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.340 [2024-11-15 11:22:46.988063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.340 [2024-11-15 11:22:46.988111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:04.340 [2024-11-15 11:22:46.988130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.340 [2024-11-15 11:22:46.988806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.340 [2024-11-15 11:22:46.988869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.340 [2024-11-15 11:22:46.988997] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.340 [2024-11-15 11:22:46.989061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.340 pt2 00:11:04.340 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.340 11:22:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:04.340 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.340 11:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:04.340 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.340 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.340 [2024-11-15 11:22:46.995899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:04.340 [2024-11-15 11:22:46.995985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.340 [2024-11-15 11:22:46.996006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:04.340 [2024-11-15 11:22:46.996022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.340 [2024-11-15 11:22:46.996561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.340 [2024-11-15 11:22:46.996616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:04.340 [2024-11-15 11:22:46.996695] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:04.340 [2024-11-15 11:22:46.996729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:04.340 [2024-11-15 11:22:46.996893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:04.340 [2024-11-15 11:22:46.996918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:04.340 [2024-11-15 11:22:46.997253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:04.340 [2024-11-15 11:22:46.997477] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:11:04.340 [2024-11-15 11:22:46.997493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:04.340 [2024-11-15 11:22:46.997677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.340 pt3 00:11:04.340 11:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.340 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.341 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.341 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.341 "name": "raid_bdev1", 00:11:04.341 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:04.341 "strip_size_kb": 0, 00:11:04.341 "state": "online", 00:11:04.341 "raid_level": "raid1", 00:11:04.341 "superblock": true, 00:11:04.341 "num_base_bdevs": 3, 00:11:04.341 "num_base_bdevs_discovered": 3, 00:11:04.341 "num_base_bdevs_operational": 3, 00:11:04.341 "base_bdevs_list": [ 00:11:04.341 { 00:11:04.341 "name": "pt1", 00:11:04.341 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.341 "is_configured": true, 00:11:04.341 "data_offset": 2048, 00:11:04.341 "data_size": 63488 00:11:04.341 }, 00:11:04.341 { 00:11:04.341 "name": "pt2", 00:11:04.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.341 "is_configured": true, 00:11:04.341 "data_offset": 2048, 00:11:04.341 "data_size": 63488 00:11:04.341 }, 00:11:04.341 { 00:11:04.341 "name": "pt3", 00:11:04.341 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.341 "is_configured": true, 00:11:04.341 "data_offset": 2048, 00:11:04.341 "data_size": 63488 00:11:04.341 } 00:11:04.341 ] 00:11:04.341 }' 00:11:04.341 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.341 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.599 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:04.599 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:04.599 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:04.599 11:22:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:04.599 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:04.599 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:04.599 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:04.599 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.599 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:04.599 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.599 [2024-11-15 11:22:47.512580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.599 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.857 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:04.857 "name": "raid_bdev1", 00:11:04.857 "aliases": [ 00:11:04.857 "d1838b13-2ba7-4bed-8f8f-d3da5905631b" 00:11:04.857 ], 00:11:04.857 "product_name": "Raid Volume", 00:11:04.857 "block_size": 512, 00:11:04.857 "num_blocks": 63488, 00:11:04.857 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:04.857 "assigned_rate_limits": { 00:11:04.857 "rw_ios_per_sec": 0, 00:11:04.857 "rw_mbytes_per_sec": 0, 00:11:04.857 "r_mbytes_per_sec": 0, 00:11:04.857 "w_mbytes_per_sec": 0 00:11:04.857 }, 00:11:04.857 "claimed": false, 00:11:04.857 "zoned": false, 00:11:04.857 "supported_io_types": { 00:11:04.857 "read": true, 00:11:04.857 "write": true, 00:11:04.857 "unmap": false, 00:11:04.857 "flush": false, 00:11:04.857 "reset": true, 00:11:04.857 "nvme_admin": false, 00:11:04.857 "nvme_io": false, 00:11:04.857 "nvme_io_md": false, 00:11:04.857 "write_zeroes": true, 00:11:04.857 "zcopy": false, 00:11:04.857 "get_zone_info": false, 00:11:04.857 
"zone_management": false, 00:11:04.857 "zone_append": false, 00:11:04.857 "compare": false, 00:11:04.857 "compare_and_write": false, 00:11:04.857 "abort": false, 00:11:04.857 "seek_hole": false, 00:11:04.857 "seek_data": false, 00:11:04.857 "copy": false, 00:11:04.857 "nvme_iov_md": false 00:11:04.857 }, 00:11:04.857 "memory_domains": [ 00:11:04.857 { 00:11:04.857 "dma_device_id": "system", 00:11:04.857 "dma_device_type": 1 00:11:04.857 }, 00:11:04.857 { 00:11:04.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.857 "dma_device_type": 2 00:11:04.857 }, 00:11:04.857 { 00:11:04.857 "dma_device_id": "system", 00:11:04.857 "dma_device_type": 1 00:11:04.857 }, 00:11:04.857 { 00:11:04.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.857 "dma_device_type": 2 00:11:04.857 }, 00:11:04.857 { 00:11:04.857 "dma_device_id": "system", 00:11:04.857 "dma_device_type": 1 00:11:04.857 }, 00:11:04.857 { 00:11:04.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.857 "dma_device_type": 2 00:11:04.857 } 00:11:04.857 ], 00:11:04.857 "driver_specific": { 00:11:04.857 "raid": { 00:11:04.857 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:04.857 "strip_size_kb": 0, 00:11:04.857 "state": "online", 00:11:04.857 "raid_level": "raid1", 00:11:04.857 "superblock": true, 00:11:04.857 "num_base_bdevs": 3, 00:11:04.857 "num_base_bdevs_discovered": 3, 00:11:04.857 "num_base_bdevs_operational": 3, 00:11:04.857 "base_bdevs_list": [ 00:11:04.857 { 00:11:04.857 "name": "pt1", 00:11:04.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.857 "is_configured": true, 00:11:04.857 "data_offset": 2048, 00:11:04.857 "data_size": 63488 00:11:04.857 }, 00:11:04.857 { 00:11:04.857 "name": "pt2", 00:11:04.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.857 "is_configured": true, 00:11:04.857 "data_offset": 2048, 00:11:04.857 "data_size": 63488 00:11:04.857 }, 00:11:04.857 { 00:11:04.857 "name": "pt3", 00:11:04.857 "uuid": "00000000-0000-0000-0000-000000000003", 
00:11:04.857 "is_configured": true, 00:11:04.857 "data_offset": 2048, 00:11:04.857 "data_size": 63488 00:11:04.857 } 00:11:04.857 ] 00:11:04.857 } 00:11:04.857 } 00:11:04.857 }' 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:04.858 pt2 00:11:04.858 pt3' 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.858 
11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.858 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.116 [2024-11-15 11:22:47.832529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d1838b13-2ba7-4bed-8f8f-d3da5905631b '!=' d1838b13-2ba7-4bed-8f8f-d3da5905631b ']' 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.116 [2024-11-15 11:22:47.884258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.116 11:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.116 "name": "raid_bdev1", 00:11:05.116 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:05.116 "strip_size_kb": 0, 00:11:05.116 "state": "online", 00:11:05.116 "raid_level": "raid1", 00:11:05.116 "superblock": true, 00:11:05.116 "num_base_bdevs": 3, 00:11:05.117 "num_base_bdevs_discovered": 2, 00:11:05.117 "num_base_bdevs_operational": 2, 00:11:05.117 "base_bdevs_list": [ 00:11:05.117 { 00:11:05.117 "name": null, 00:11:05.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.117 "is_configured": false, 00:11:05.117 "data_offset": 0, 00:11:05.117 "data_size": 63488 00:11:05.117 }, 00:11:05.117 { 00:11:05.117 "name": "pt2", 00:11:05.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.117 "is_configured": true, 00:11:05.117 "data_offset": 2048, 00:11:05.117 "data_size": 63488 00:11:05.117 }, 00:11:05.117 { 00:11:05.117 "name": "pt3", 00:11:05.117 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.117 "is_configured": true, 00:11:05.117 "data_offset": 2048, 00:11:05.117 "data_size": 63488 00:11:05.117 } 00:11:05.117 ] 00:11:05.117 }' 00:11:05.117 11:22:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.117 11:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.683 [2024-11-15 11:22:48.404446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.683 [2024-11-15 11:22:48.404484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.683 [2024-11-15 11:22:48.404618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.683 [2024-11-15 11:22:48.404698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.683 [2024-11-15 11:22:48.404720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:05.683 
11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.683 11:22:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.683 [2024-11-15 11:22:48.488410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:05.683 [2024-11-15 11:22:48.488494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.683 [2024-11-15 11:22:48.488522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:05.683 [2024-11-15 11:22:48.488554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.683 [2024-11-15 11:22:48.491762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.683 [2024-11-15 11:22:48.491825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:05.683 [2024-11-15 11:22:48.491922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:05.683 [2024-11-15 11:22:48.491984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.684 pt2 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.684 "name": "raid_bdev1", 00:11:05.684 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:05.684 "strip_size_kb": 0, 00:11:05.684 "state": "configuring", 00:11:05.684 "raid_level": "raid1", 00:11:05.684 "superblock": true, 00:11:05.684 "num_base_bdevs": 3, 00:11:05.684 "num_base_bdevs_discovered": 1, 00:11:05.684 "num_base_bdevs_operational": 2, 00:11:05.684 "base_bdevs_list": [ 00:11:05.684 { 00:11:05.684 "name": null, 00:11:05.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.684 "is_configured": false, 00:11:05.684 "data_offset": 2048, 00:11:05.684 "data_size": 63488 00:11:05.684 }, 00:11:05.684 { 00:11:05.684 "name": "pt2", 00:11:05.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.684 "is_configured": true, 00:11:05.684 "data_offset": 2048, 00:11:05.684 "data_size": 63488 00:11:05.684 }, 00:11:05.684 { 00:11:05.684 "name": null, 00:11:05.684 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.684 "is_configured": false, 00:11:05.684 "data_offset": 2048, 00:11:05.684 "data_size": 63488 00:11:05.684 } 00:11:05.684 ] 00:11:05.684 }' 
00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.684 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.250 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:06.250 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:06.250 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:06.250 11:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:06.250 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.250 11:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.250 [2024-11-15 11:22:49.004654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:06.250 [2024-11-15 11:22:49.004754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.250 [2024-11-15 11:22:49.004786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:06.250 [2024-11-15 11:22:49.004804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.250 [2024-11-15 11:22:49.005419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.250 [2024-11-15 11:22:49.005453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:06.250 [2024-11-15 11:22:49.005599] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:06.250 [2024-11-15 11:22:49.005645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:06.250 [2024-11-15 11:22:49.005795] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:06.250 [2024-11-15 11:22:49.005816] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:06.250 [2024-11-15 11:22:49.006233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:06.250 [2024-11-15 11:22:49.006453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:06.250 [2024-11-15 11:22:49.006470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:06.250 [2024-11-15 11:22:49.006670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.250 pt3 00:11:06.250 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.250 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:06.250 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.250 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.250 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.250 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.250 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:06.250 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.250 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.250 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.250 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.250 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.251 11:22:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.251 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.251 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.251 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.251 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.251 "name": "raid_bdev1", 00:11:06.251 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:06.251 "strip_size_kb": 0, 00:11:06.251 "state": "online", 00:11:06.251 "raid_level": "raid1", 00:11:06.251 "superblock": true, 00:11:06.251 "num_base_bdevs": 3, 00:11:06.251 "num_base_bdevs_discovered": 2, 00:11:06.251 "num_base_bdevs_operational": 2, 00:11:06.251 "base_bdevs_list": [ 00:11:06.251 { 00:11:06.251 "name": null, 00:11:06.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.251 "is_configured": false, 00:11:06.251 "data_offset": 2048, 00:11:06.251 "data_size": 63488 00:11:06.251 }, 00:11:06.251 { 00:11:06.251 "name": "pt2", 00:11:06.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.251 "is_configured": true, 00:11:06.251 "data_offset": 2048, 00:11:06.251 "data_size": 63488 00:11:06.251 }, 00:11:06.251 { 00:11:06.251 "name": "pt3", 00:11:06.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.251 "is_configured": true, 00:11:06.251 "data_offset": 2048, 00:11:06.251 "data_size": 63488 00:11:06.251 } 00:11:06.251 ] 00:11:06.251 }' 00:11:06.251 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.251 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.816 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.816 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.816 
11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.816 [2024-11-15 11:22:49.528789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.816 [2024-11-15 11:22:49.528830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.816 [2024-11-15 11:22:49.528948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.817 [2024-11-15 11:22:49.529037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.817 [2024-11-15 11:22:49.529053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.817 11:22:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.817 [2024-11-15 11:22:49.596814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:06.817 [2024-11-15 11:22:49.596884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.817 [2024-11-15 11:22:49.596915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:06.817 [2024-11-15 11:22:49.596930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.817 [2024-11-15 11:22:49.600261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.817 [2024-11-15 11:22:49.600316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:06.817 [2024-11-15 11:22:49.600421] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:06.817 [2024-11-15 11:22:49.600477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:06.817 [2024-11-15 11:22:49.600661] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:06.817 [2024-11-15 11:22:49.600709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.817 [2024-11-15 11:22:49.600762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:06.817 [2024-11-15 
11:22:49.600830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:06.817 pt1 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.817 11:22:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.817 "name": "raid_bdev1", 00:11:06.817 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:06.817 "strip_size_kb": 0, 00:11:06.817 "state": "configuring", 00:11:06.817 "raid_level": "raid1", 00:11:06.817 "superblock": true, 00:11:06.817 "num_base_bdevs": 3, 00:11:06.817 "num_base_bdevs_discovered": 1, 00:11:06.817 "num_base_bdevs_operational": 2, 00:11:06.817 "base_bdevs_list": [ 00:11:06.817 { 00:11:06.817 "name": null, 00:11:06.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.817 "is_configured": false, 00:11:06.817 "data_offset": 2048, 00:11:06.817 "data_size": 63488 00:11:06.817 }, 00:11:06.817 { 00:11:06.817 "name": "pt2", 00:11:06.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.817 "is_configured": true, 00:11:06.817 "data_offset": 2048, 00:11:06.817 "data_size": 63488 00:11:06.817 }, 00:11:06.817 { 00:11:06.817 "name": null, 00:11:06.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.817 "is_configured": false, 00:11:06.817 "data_offset": 2048, 00:11:06.817 "data_size": 63488 00:11:06.817 } 00:11:06.817 ] 00:11:06.817 }' 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.817 11:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.383 [2024-11-15 11:22:50.185151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:07.383 [2024-11-15 11:22:50.185272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.383 [2024-11-15 11:22:50.185311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:07.383 [2024-11-15 11:22:50.185326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.383 [2024-11-15 11:22:50.186045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.383 [2024-11-15 11:22:50.186072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:07.383 [2024-11-15 11:22:50.186202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:07.383 [2024-11-15 11:22:50.186237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:07.383 [2024-11-15 11:22:50.186417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:07.383 [2024-11-15 11:22:50.186434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:07.383 [2024-11-15 11:22:50.186794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:07.383 [2024-11-15 11:22:50.186981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:07.383 [2024-11-15 11:22:50.187055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:11:07.383 [2024-11-15 11:22:50.187261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.383 pt3 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.383 11:22:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.383 "name": "raid_bdev1", 00:11:07.383 "uuid": "d1838b13-2ba7-4bed-8f8f-d3da5905631b", 00:11:07.383 "strip_size_kb": 0, 00:11:07.383 "state": "online", 00:11:07.384 "raid_level": "raid1", 00:11:07.384 "superblock": true, 00:11:07.384 "num_base_bdevs": 3, 00:11:07.384 "num_base_bdevs_discovered": 2, 00:11:07.384 "num_base_bdevs_operational": 2, 00:11:07.384 "base_bdevs_list": [ 00:11:07.384 { 00:11:07.384 "name": null, 00:11:07.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.384 "is_configured": false, 00:11:07.384 "data_offset": 2048, 00:11:07.384 "data_size": 63488 00:11:07.384 }, 00:11:07.384 { 00:11:07.384 "name": "pt2", 00:11:07.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.384 "is_configured": true, 00:11:07.384 "data_offset": 2048, 00:11:07.384 "data_size": 63488 00:11:07.384 }, 00:11:07.384 { 00:11:07.384 "name": "pt3", 00:11:07.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.384 "is_configured": true, 00:11:07.384 "data_offset": 2048, 00:11:07.384 "data_size": 63488 00:11:07.384 } 00:11:07.384 ] 00:11:07.384 }' 00:11:07.384 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.384 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:07.951 
11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:07.951 [2024-11-15 11:22:50.765719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d1838b13-2ba7-4bed-8f8f-d3da5905631b '!=' d1838b13-2ba7-4bed-8f8f-d3da5905631b ']' 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68589 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68589 ']' 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68589 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68589 00:11:07.951 killing process with pid 68589 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68589' 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 68589 00:11:07.951 [2024-11-15 
11:22:50.844695] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.951 11:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68589 00:11:07.951 [2024-11-15 11:22:50.844816] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.951 [2024-11-15 11:22:50.844896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.951 [2024-11-15 11:22:50.844929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:08.210 [2024-11-15 11:22:51.118869] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.610 11:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:09.610 00:11:09.610 real 0m8.671s 00:11:09.610 user 0m14.062s 00:11:09.610 sys 0m1.294s 00:11:09.610 11:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.610 11:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.610 ************************************ 00:11:09.610 END TEST raid_superblock_test 00:11:09.610 ************************************ 00:11:09.610 11:22:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:09.610 11:22:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:09.610 11:22:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.610 11:22:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.610 ************************************ 00:11:09.610 START TEST raid_read_error_test 00:11:09.610 ************************************ 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.610 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:09.611 11:22:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.974PNmCABz 00:11:09.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69046 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69046 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69046 ']' 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:09.611 11:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.611 [2024-11-15 11:22:52.422669] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:11:09.611 [2024-11-15 11:22:52.423101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69046 ] 00:11:09.869 [2024-11-15 11:22:52.610827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.869 [2024-11-15 11:22:52.760631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.127 [2024-11-15 11:22:52.989442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.127 [2024-11-15 11:22:52.989805] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.697 BaseBdev1_malloc 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.697 true 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.697 [2024-11-15 11:22:53.461134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:10.697 [2024-11-15 11:22:53.461250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.697 [2024-11-15 11:22:53.461283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:10.697 [2024-11-15 11:22:53.461302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.697 [2024-11-15 11:22:53.464386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.697 [2024-11-15 11:22:53.464438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:10.697 BaseBdev1 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.697 BaseBdev2_malloc 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.697 true 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.697 [2024-11-15 11:22:53.523315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:10.697 [2024-11-15 11:22:53.523404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.697 [2024-11-15 11:22:53.523431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:10.697 [2024-11-15 11:22:53.523448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.697 [2024-11-15 11:22:53.526580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.697 BaseBdev2 00:11:10.697 [2024-11-15 11:22:53.526802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.697 BaseBdev3_malloc 00:11:10.697 11:22:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.697 true 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.697 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.697 [2024-11-15 11:22:53.595659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:10.698 [2024-11-15 11:22:53.595781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.698 [2024-11-15 11:22:53.595899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:10.698 [2024-11-15 11:22:53.596031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.698 [2024-11-15 11:22:53.599277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.698 [2024-11-15 11:22:53.599343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:10.698 BaseBdev3 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.698 [2024-11-15 11:22:53.603735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.698 [2024-11-15 11:22:53.606606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.698 [2024-11-15 11:22:53.606858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.698 [2024-11-15 11:22:53.607160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:10.698 [2024-11-15 11:22:53.607237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:10.698 [2024-11-15 11:22:53.607618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:10.698 [2024-11-15 11:22:53.607892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:10.698 [2024-11-15 11:22:53.607920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:10.698 [2024-11-15 11:22:53.608221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.698 11:22:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.698 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.956 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.956 "name": "raid_bdev1", 00:11:10.956 "uuid": "2e6b3029-a0aa-43c1-b7c9-a7f2324fdfbf", 00:11:10.956 "strip_size_kb": 0, 00:11:10.956 "state": "online", 00:11:10.956 "raid_level": "raid1", 00:11:10.956 "superblock": true, 00:11:10.956 "num_base_bdevs": 3, 00:11:10.956 "num_base_bdevs_discovered": 3, 00:11:10.956 "num_base_bdevs_operational": 3, 00:11:10.956 "base_bdevs_list": [ 00:11:10.956 { 00:11:10.956 "name": "BaseBdev1", 00:11:10.956 "uuid": "9878641b-45bf-583c-95bb-7d398d4b76ae", 00:11:10.956 "is_configured": true, 00:11:10.956 "data_offset": 2048, 00:11:10.956 "data_size": 63488 00:11:10.956 }, 00:11:10.956 { 00:11:10.956 "name": "BaseBdev2", 00:11:10.956 "uuid": "901374e0-e9d7-512f-a358-bb38455009f0", 00:11:10.956 "is_configured": true, 00:11:10.956 "data_offset": 2048, 00:11:10.956 "data_size": 63488 
00:11:10.956 }, 00:11:10.956 { 00:11:10.956 "name": "BaseBdev3", 00:11:10.956 "uuid": "2bb586a8-ba28-595d-ad9b-c157d5da7b67", 00:11:10.956 "is_configured": true, 00:11:10.956 "data_offset": 2048, 00:11:10.956 "data_size": 63488 00:11:10.956 } 00:11:10.956 ] 00:11:10.956 }' 00:11:10.956 11:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.956 11:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.215 11:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:11.215 11:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:11.474 [2024-11-15 11:22:54.241910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.408 
11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.408 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.408 "name": "raid_bdev1", 00:11:12.408 "uuid": "2e6b3029-a0aa-43c1-b7c9-a7f2324fdfbf", 00:11:12.409 "strip_size_kb": 0, 00:11:12.409 "state": "online", 00:11:12.409 "raid_level": "raid1", 00:11:12.409 "superblock": true, 00:11:12.409 "num_base_bdevs": 3, 00:11:12.409 "num_base_bdevs_discovered": 3, 00:11:12.409 "num_base_bdevs_operational": 3, 00:11:12.409 "base_bdevs_list": [ 00:11:12.409 { 00:11:12.409 "name": "BaseBdev1", 00:11:12.409 "uuid": "9878641b-45bf-583c-95bb-7d398d4b76ae", 
00:11:12.409 "is_configured": true, 00:11:12.409 "data_offset": 2048, 00:11:12.409 "data_size": 63488 00:11:12.409 }, 00:11:12.409 { 00:11:12.409 "name": "BaseBdev2", 00:11:12.409 "uuid": "901374e0-e9d7-512f-a358-bb38455009f0", 00:11:12.409 "is_configured": true, 00:11:12.409 "data_offset": 2048, 00:11:12.409 "data_size": 63488 00:11:12.409 }, 00:11:12.409 { 00:11:12.409 "name": "BaseBdev3", 00:11:12.409 "uuid": "2bb586a8-ba28-595d-ad9b-c157d5da7b67", 00:11:12.409 "is_configured": true, 00:11:12.409 "data_offset": 2048, 00:11:12.409 "data_size": 63488 00:11:12.409 } 00:11:12.409 ] 00:11:12.409 }' 00:11:12.409 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.409 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.976 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:12.976 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.976 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.976 [2024-11-15 11:22:55.684417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.976 [2024-11-15 11:22:55.684454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.976 { 00:11:12.976 "results": [ 00:11:12.976 { 00:11:12.976 "job": "raid_bdev1", 00:11:12.976 "core_mask": "0x1", 00:11:12.976 "workload": "randrw", 00:11:12.976 "percentage": 50, 00:11:12.976 "status": "finished", 00:11:12.976 "queue_depth": 1, 00:11:12.976 "io_size": 131072, 00:11:12.976 "runtime": 1.43984, 00:11:12.976 "iops": 8498.166462940326, 00:11:12.976 "mibps": 1062.2708078675407, 00:11:12.976 "io_failed": 0, 00:11:12.976 "io_timeout": 0, 00:11:12.976 "avg_latency_us": 113.34891854141283, 00:11:12.976 "min_latency_us": 40.49454545454545, 00:11:12.977 "max_latency_us": 1936.290909090909 00:11:12.977 
} 00:11:12.977 ], 00:11:12.977 "core_count": 1 00:11:12.977 } 00:11:12.977 [2024-11-15 11:22:55.687849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.977 [2024-11-15 11:22:55.687913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.977 [2024-11-15 11:22:55.688051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.977 [2024-11-15 11:22:55.688068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:12.977 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.977 11:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69046 00:11:12.977 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69046 ']' 00:11:12.977 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69046 00:11:12.977 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:12.977 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:12.977 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69046 00:11:12.977 killing process with pid 69046 00:11:12.977 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:12.977 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:12.977 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69046' 00:11:12.977 11:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69046 00:11:12.977 [2024-11-15 11:22:55.720770] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.977 11:22:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69046 00:11:13.235 [2024-11-15 11:22:55.928019] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.171 11:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.974PNmCABz 00:11:14.171 11:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:14.171 11:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:14.171 11:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:14.171 11:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:14.171 11:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.171 11:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:14.171 11:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:14.171 00:11:14.171 real 0m4.789s 00:11:14.171 user 0m5.830s 00:11:14.171 sys 0m0.683s 00:11:14.171 11:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.171 ************************************ 00:11:14.171 END TEST raid_read_error_test 00:11:14.171 ************************************ 00:11:14.171 11:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.431 11:22:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:14.431 11:22:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:14.431 11:22:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.431 11:22:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.431 ************************************ 00:11:14.431 START TEST raid_write_error_test 00:11:14.431 ************************************ 00:11:14.431 11:22:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YJlIMF9OXG 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69190 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69190 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69190 ']' 00:11:14.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:14.431 11:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.431 [2024-11-15 11:22:57.300455] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:11:14.432 [2024-11-15 11:22:57.300655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69190 ] 00:11:14.691 [2024-11-15 11:22:57.486014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.691 [2024-11-15 11:22:57.618773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.950 [2024-11-15 11:22:57.837119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.950 [2024-11-15 11:22:57.837239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.517 BaseBdev1_malloc 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.517 true 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.517 [2024-11-15 11:22:58.277580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:15.517 [2024-11-15 11:22:58.277822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.517 [2024-11-15 11:22:58.277864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:15.517 [2024-11-15 11:22:58.277885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.517 BaseBdev1 00:11:15.517 [2024-11-15 11:22:58.281330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.517 [2024-11-15 11:22:58.281375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:15.517 BaseBdev2_malloc 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.517 true 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.517 [2024-11-15 11:22:58.344380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:15.517 [2024-11-15 11:22:58.344454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.517 [2024-11-15 11:22:58.344483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:15.517 [2024-11-15 11:22:58.344502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.517 [2024-11-15 11:22:58.348130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.517 [2024-11-15 11:22:58.348279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:15.517 BaseBdev2 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.517 11:22:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.517 BaseBdev3_malloc 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.517 true 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.517 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.517 [2024-11-15 11:22:58.423025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:15.517 [2024-11-15 11:22:58.423302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.517 [2024-11-15 11:22:58.423344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:15.517 [2024-11-15 11:22:58.423365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.517 [2024-11-15 11:22:58.426436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.518 [2024-11-15 11:22:58.426656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:15.518 BaseBdev3 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.518 [2024-11-15 11:22:58.431357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.518 [2024-11-15 11:22:58.434103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.518 [2024-11-15 11:22:58.434379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.518 [2024-11-15 11:22:58.434827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:15.518 [2024-11-15 11:22:58.434949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:15.518 [2024-11-15 11:22:58.435324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:15.518 [2024-11-15 11:22:58.435744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:15.518 [2024-11-15 11:22:58.435871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:15.518 [2024-11-15 11:22:58.436348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.518 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.776 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.776 "name": "raid_bdev1", 00:11:15.776 "uuid": "d0b3b84a-3bea-477d-9d26-e798a1e02d86", 00:11:15.776 "strip_size_kb": 0, 00:11:15.776 "state": "online", 00:11:15.776 "raid_level": "raid1", 00:11:15.776 "superblock": true, 00:11:15.776 "num_base_bdevs": 3, 00:11:15.776 "num_base_bdevs_discovered": 3, 00:11:15.776 "num_base_bdevs_operational": 3, 00:11:15.776 "base_bdevs_list": [ 00:11:15.776 { 00:11:15.776 "name": "BaseBdev1", 00:11:15.776 
"uuid": "8dd02f80-0953-5394-a65d-64437cc1ea5a", 00:11:15.776 "is_configured": true, 00:11:15.776 "data_offset": 2048, 00:11:15.776 "data_size": 63488 00:11:15.776 }, 00:11:15.776 { 00:11:15.776 "name": "BaseBdev2", 00:11:15.776 "uuid": "17948f6f-d916-51f0-ab42-e7a254fbc921", 00:11:15.776 "is_configured": true, 00:11:15.776 "data_offset": 2048, 00:11:15.776 "data_size": 63488 00:11:15.776 }, 00:11:15.776 { 00:11:15.776 "name": "BaseBdev3", 00:11:15.776 "uuid": "412aa465-c9ad-54ef-8ad1-3b4d59a4cd80", 00:11:15.776 "is_configured": true, 00:11:15.776 "data_offset": 2048, 00:11:15.776 "data_size": 63488 00:11:15.776 } 00:11:15.776 ] 00:11:15.776 }' 00:11:15.776 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.776 11:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.035 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:16.035 11:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:16.293 [2024-11-15 11:22:59.078010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.275 [2024-11-15 11:22:59.959517] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:17.275 [2024-11-15 11:22:59.959613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.275 [2024-11-15 11:22:59.959933] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.275 11:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.275 11:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.275 "name": "raid_bdev1", 00:11:17.275 "uuid": "d0b3b84a-3bea-477d-9d26-e798a1e02d86", 00:11:17.275 "strip_size_kb": 0, 00:11:17.275 "state": "online", 00:11:17.275 "raid_level": "raid1", 00:11:17.275 "superblock": true, 00:11:17.275 "num_base_bdevs": 3, 00:11:17.275 "num_base_bdevs_discovered": 2, 00:11:17.275 "num_base_bdevs_operational": 2, 00:11:17.275 "base_bdevs_list": [ 00:11:17.275 { 00:11:17.275 "name": null, 00:11:17.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.275 "is_configured": false, 00:11:17.275 "data_offset": 0, 00:11:17.275 "data_size": 63488 00:11:17.275 }, 00:11:17.275 { 00:11:17.275 "name": "BaseBdev2", 00:11:17.275 "uuid": "17948f6f-d916-51f0-ab42-e7a254fbc921", 00:11:17.275 "is_configured": true, 00:11:17.275 "data_offset": 2048, 00:11:17.275 "data_size": 63488 00:11:17.275 }, 00:11:17.275 { 00:11:17.275 "name": "BaseBdev3", 00:11:17.275 "uuid": "412aa465-c9ad-54ef-8ad1-3b4d59a4cd80", 00:11:17.275 "is_configured": true, 00:11:17.275 "data_offset": 2048, 00:11:17.275 "data_size": 63488 00:11:17.275 } 00:11:17.275 ] 00:11:17.275 }' 00:11:17.275 11:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.275 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.534 11:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.534 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.534 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.793 [2024-11-15 11:23:00.486087] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.793 [2024-11-15 11:23:00.486130] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.793 [2024-11-15 11:23:00.489723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.793 [2024-11-15 11:23:00.489801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.793 [2024-11-15 11:23:00.489915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.793 [2024-11-15 11:23:00.489942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:17.793 { 00:11:17.793 "results": [ 00:11:17.793 { 00:11:17.793 "job": "raid_bdev1", 00:11:17.793 "core_mask": "0x1", 00:11:17.793 "workload": "randrw", 00:11:17.793 "percentage": 50, 00:11:17.793 "status": "finished", 00:11:17.793 "queue_depth": 1, 00:11:17.793 "io_size": 131072, 00:11:17.793 "runtime": 1.405343, 00:11:17.793 "iops": 9531.48092672038, 00:11:17.793 "mibps": 1191.4351158400475, 00:11:17.793 "io_failed": 0, 00:11:17.793 "io_timeout": 0, 00:11:17.793 "avg_latency_us": 100.1956757270352, 00:11:17.793 "min_latency_us": 42.35636363636364, 00:11:17.793 "max_latency_us": 1906.5018181818182 00:11:17.793 } 00:11:17.793 ], 00:11:17.793 "core_count": 1 00:11:17.793 } 00:11:17.793 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.793 11:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69190 00:11:17.793 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69190 ']' 00:11:17.793 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69190 00:11:17.793 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:11:17.793 11:23:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:17.793 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69190 00:11:17.793 killing process with pid 69190 00:11:17.793 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:17.793 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:17.793 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69190' 00:11:17.793 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69190 00:11:17.793 [2024-11-15 11:23:00.525314] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.793 11:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69190 00:11:17.793 [2024-11-15 11:23:00.731860] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.170 11:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YJlIMF9OXG 00:11:19.170 11:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:19.170 11:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:19.170 11:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:19.170 11:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:19.170 11:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:19.170 11:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:19.170 11:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:19.170 00:11:19.170 real 0m4.809s 00:11:19.170 user 0m5.824s 00:11:19.170 sys 0m0.676s 00:11:19.170 11:23:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:19.170 ************************************ 00:11:19.170 END TEST raid_write_error_test 00:11:19.170 ************************************ 00:11:19.170 11:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.170 11:23:01 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:19.170 11:23:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:19.170 11:23:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:19.170 11:23:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:19.170 11:23:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:19.170 11:23:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.170 ************************************ 00:11:19.170 START TEST raid_state_function_test 00:11:19.170 ************************************ 00:11:19.170 11:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:11:19.170 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:19.170 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:19.170 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:19.170 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:19.170 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:19.170 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.170 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:19.171 
11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69335 00:11:19.171 Process raid pid: 69335 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69335' 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69335 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69335 ']' 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:19.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:19.171 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.430 [2024-11-15 11:23:02.122413] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:11:19.430 [2024-11-15 11:23:02.122614] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.430 [2024-11-15 11:23:02.311686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.689 [2024-11-15 11:23:02.457891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.948 [2024-11-15 11:23:02.684356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.948 [2024-11-15 11:23:02.684396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.205 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:20.205 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:20.205 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.205 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.205 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.205 [2024-11-15 11:23:03.110140] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.205 [2024-11-15 11:23:03.110433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.206 [2024-11-15 11:23:03.110597] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.206 [2024-11-15 11:23:03.110642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.206 [2024-11-15 11:23:03.110656] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:20.206 [2024-11-15 11:23:03.110672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:20.206 [2024-11-15 11:23:03.110682] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:20.206 [2024-11-15 11:23:03.110698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.206 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.464 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.464 "name": "Existed_Raid", 00:11:20.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.464 "strip_size_kb": 64, 00:11:20.464 "state": "configuring", 00:11:20.464 "raid_level": "raid0", 00:11:20.464 "superblock": false, 00:11:20.464 "num_base_bdevs": 4, 00:11:20.464 "num_base_bdevs_discovered": 0, 00:11:20.464 "num_base_bdevs_operational": 4, 00:11:20.464 "base_bdevs_list": [ 00:11:20.464 { 00:11:20.464 "name": "BaseBdev1", 00:11:20.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.464 "is_configured": false, 00:11:20.464 "data_offset": 0, 00:11:20.464 "data_size": 0 00:11:20.464 }, 00:11:20.464 { 00:11:20.464 "name": "BaseBdev2", 00:11:20.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.464 "is_configured": false, 00:11:20.464 "data_offset": 0, 00:11:20.464 "data_size": 0 00:11:20.464 }, 00:11:20.464 { 00:11:20.464 "name": "BaseBdev3", 00:11:20.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.464 "is_configured": false, 00:11:20.464 "data_offset": 0, 00:11:20.464 "data_size": 0 00:11:20.464 }, 00:11:20.464 { 00:11:20.464 "name": "BaseBdev4", 00:11:20.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.464 "is_configured": false, 00:11:20.464 "data_offset": 0, 00:11:20.464 "data_size": 0 00:11:20.464 } 00:11:20.464 ] 00:11:20.464 }' 00:11:20.464 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.464 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.723 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:20.723 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.723 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.723 [2024-11-15 11:23:03.642218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:20.723 [2024-11-15 11:23:03.642404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:20.723 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.723 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.723 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.723 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.723 [2024-11-15 11:23:03.650196] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.723 [2024-11-15 11:23:03.650378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.723 [2024-11-15 11:23:03.650500] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.723 [2024-11-15 11:23:03.650536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.723 [2024-11-15 11:23:03.650548] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:20.723 [2024-11-15 11:23:03.650564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:20.723 [2024-11-15 11:23:03.650574] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:20.723 [2024-11-15 11:23:03.650589] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:20.723 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.723 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.723 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.723 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.981 [2024-11-15 11:23:03.699979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.981 BaseBdev1 00:11:20.981 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.981 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.982 [ 00:11:20.982 { 00:11:20.982 "name": "BaseBdev1", 00:11:20.982 "aliases": [ 00:11:20.982 "72ece926-4000-4d28-892f-02846972303b" 00:11:20.982 ], 00:11:20.982 "product_name": "Malloc disk", 00:11:20.982 "block_size": 512, 00:11:20.982 "num_blocks": 65536, 00:11:20.982 "uuid": "72ece926-4000-4d28-892f-02846972303b", 00:11:20.982 "assigned_rate_limits": { 00:11:20.982 "rw_ios_per_sec": 0, 00:11:20.982 "rw_mbytes_per_sec": 0, 00:11:20.982 "r_mbytes_per_sec": 0, 00:11:20.982 "w_mbytes_per_sec": 0 00:11:20.982 }, 00:11:20.982 "claimed": true, 00:11:20.982 "claim_type": "exclusive_write", 00:11:20.982 "zoned": false, 00:11:20.982 "supported_io_types": { 00:11:20.982 "read": true, 00:11:20.982 "write": true, 00:11:20.982 "unmap": true, 00:11:20.982 "flush": true, 00:11:20.982 "reset": true, 00:11:20.982 "nvme_admin": false, 00:11:20.982 "nvme_io": false, 00:11:20.982 "nvme_io_md": false, 00:11:20.982 "write_zeroes": true, 00:11:20.982 "zcopy": true, 00:11:20.982 "get_zone_info": false, 00:11:20.982 "zone_management": false, 00:11:20.982 "zone_append": false, 00:11:20.982 "compare": false, 00:11:20.982 "compare_and_write": false, 00:11:20.982 "abort": true, 00:11:20.982 "seek_hole": false, 00:11:20.982 "seek_data": false, 00:11:20.982 "copy": true, 00:11:20.982 "nvme_iov_md": false 00:11:20.982 }, 00:11:20.982 "memory_domains": [ 00:11:20.982 { 00:11:20.982 "dma_device_id": "system", 00:11:20.982 "dma_device_type": 1 00:11:20.982 }, 00:11:20.982 { 00:11:20.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.982 "dma_device_type": 2 00:11:20.982 } 00:11:20.982 ], 00:11:20.982 "driver_specific": {} 00:11:20.982 } 00:11:20.982 ] 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.982 "name": "Existed_Raid", 
00:11:20.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.982 "strip_size_kb": 64, 00:11:20.982 "state": "configuring", 00:11:20.982 "raid_level": "raid0", 00:11:20.982 "superblock": false, 00:11:20.982 "num_base_bdevs": 4, 00:11:20.982 "num_base_bdevs_discovered": 1, 00:11:20.982 "num_base_bdevs_operational": 4, 00:11:20.982 "base_bdevs_list": [ 00:11:20.982 { 00:11:20.982 "name": "BaseBdev1", 00:11:20.982 "uuid": "72ece926-4000-4d28-892f-02846972303b", 00:11:20.982 "is_configured": true, 00:11:20.982 "data_offset": 0, 00:11:20.982 "data_size": 65536 00:11:20.982 }, 00:11:20.982 { 00:11:20.982 "name": "BaseBdev2", 00:11:20.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.982 "is_configured": false, 00:11:20.982 "data_offset": 0, 00:11:20.982 "data_size": 0 00:11:20.982 }, 00:11:20.982 { 00:11:20.982 "name": "BaseBdev3", 00:11:20.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.982 "is_configured": false, 00:11:20.982 "data_offset": 0, 00:11:20.982 "data_size": 0 00:11:20.982 }, 00:11:20.982 { 00:11:20.982 "name": "BaseBdev4", 00:11:20.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.982 "is_configured": false, 00:11:20.982 "data_offset": 0, 00:11:20.982 "data_size": 0 00:11:20.982 } 00:11:20.982 ] 00:11:20.982 }' 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.982 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.550 [2024-11-15 11:23:04.268230] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.550 [2024-11-15 11:23:04.268315] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.550 [2024-11-15 11:23:04.276287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.550 [2024-11-15 11:23:04.279352] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.550 [2024-11-15 11:23:04.279617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.550 [2024-11-15 11:23:04.279752] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.550 [2024-11-15 11:23:04.279816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.550 [2024-11-15 11:23:04.279957] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:21.550 [2024-11-15 11:23:04.280017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.550 "name": "Existed_Raid", 00:11:21.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.550 "strip_size_kb": 64, 00:11:21.550 "state": "configuring", 00:11:21.550 "raid_level": "raid0", 00:11:21.550 "superblock": false, 00:11:21.550 "num_base_bdevs": 4, 00:11:21.550 
"num_base_bdevs_discovered": 1, 00:11:21.550 "num_base_bdevs_operational": 4, 00:11:21.550 "base_bdevs_list": [ 00:11:21.550 { 00:11:21.550 "name": "BaseBdev1", 00:11:21.550 "uuid": "72ece926-4000-4d28-892f-02846972303b", 00:11:21.550 "is_configured": true, 00:11:21.550 "data_offset": 0, 00:11:21.550 "data_size": 65536 00:11:21.550 }, 00:11:21.550 { 00:11:21.550 "name": "BaseBdev2", 00:11:21.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.550 "is_configured": false, 00:11:21.550 "data_offset": 0, 00:11:21.550 "data_size": 0 00:11:21.550 }, 00:11:21.550 { 00:11:21.550 "name": "BaseBdev3", 00:11:21.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.550 "is_configured": false, 00:11:21.550 "data_offset": 0, 00:11:21.550 "data_size": 0 00:11:21.550 }, 00:11:21.550 { 00:11:21.550 "name": "BaseBdev4", 00:11:21.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.550 "is_configured": false, 00:11:21.550 "data_offset": 0, 00:11:21.550 "data_size": 0 00:11:21.550 } 00:11:21.550 ] 00:11:21.550 }' 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.550 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.118 BaseBdev2 00:11:22.118 [2024-11-15 11:23:04.869593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:22.118 11:23:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.118 [ 00:11:22.118 { 00:11:22.118 "name": "BaseBdev2", 00:11:22.118 "aliases": [ 00:11:22.118 "d7a11fe2-1fe0-4e7f-9455-a9d787a632ff" 00:11:22.118 ], 00:11:22.118 "product_name": "Malloc disk", 00:11:22.118 "block_size": 512, 00:11:22.118 "num_blocks": 65536, 00:11:22.118 "uuid": "d7a11fe2-1fe0-4e7f-9455-a9d787a632ff", 00:11:22.118 "assigned_rate_limits": { 00:11:22.118 "rw_ios_per_sec": 0, 00:11:22.118 "rw_mbytes_per_sec": 0, 00:11:22.118 "r_mbytes_per_sec": 0, 00:11:22.118 "w_mbytes_per_sec": 0 00:11:22.118 }, 00:11:22.118 "claimed": true, 00:11:22.118 "claim_type": "exclusive_write", 00:11:22.118 "zoned": false, 00:11:22.118 "supported_io_types": { 
00:11:22.118 "read": true, 00:11:22.118 "write": true, 00:11:22.118 "unmap": true, 00:11:22.118 "flush": true, 00:11:22.118 "reset": true, 00:11:22.118 "nvme_admin": false, 00:11:22.118 "nvme_io": false, 00:11:22.118 "nvme_io_md": false, 00:11:22.118 "write_zeroes": true, 00:11:22.118 "zcopy": true, 00:11:22.118 "get_zone_info": false, 00:11:22.118 "zone_management": false, 00:11:22.118 "zone_append": false, 00:11:22.118 "compare": false, 00:11:22.118 "compare_and_write": false, 00:11:22.118 "abort": true, 00:11:22.118 "seek_hole": false, 00:11:22.118 "seek_data": false, 00:11:22.118 "copy": true, 00:11:22.118 "nvme_iov_md": false 00:11:22.118 }, 00:11:22.118 "memory_domains": [ 00:11:22.118 { 00:11:22.118 "dma_device_id": "system", 00:11:22.118 "dma_device_type": 1 00:11:22.118 }, 00:11:22.118 { 00:11:22.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.118 "dma_device_type": 2 00:11:22.118 } 00:11:22.118 ], 00:11:22.118 "driver_specific": {} 00:11:22.118 } 00:11:22.118 ] 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:22.118 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.119 "name": "Existed_Raid", 00:11:22.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.119 "strip_size_kb": 64, 00:11:22.119 "state": "configuring", 00:11:22.119 "raid_level": "raid0", 00:11:22.119 "superblock": false, 00:11:22.119 "num_base_bdevs": 4, 00:11:22.119 "num_base_bdevs_discovered": 2, 00:11:22.119 "num_base_bdevs_operational": 4, 00:11:22.119 "base_bdevs_list": [ 00:11:22.119 { 00:11:22.119 "name": "BaseBdev1", 00:11:22.119 "uuid": "72ece926-4000-4d28-892f-02846972303b", 00:11:22.119 "is_configured": true, 00:11:22.119 "data_offset": 0, 00:11:22.119 "data_size": 65536 00:11:22.119 }, 00:11:22.119 { 00:11:22.119 "name": "BaseBdev2", 00:11:22.119 "uuid": "d7a11fe2-1fe0-4e7f-9455-a9d787a632ff", 00:11:22.119 
"is_configured": true, 00:11:22.119 "data_offset": 0, 00:11:22.119 "data_size": 65536 00:11:22.119 }, 00:11:22.119 { 00:11:22.119 "name": "BaseBdev3", 00:11:22.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.119 "is_configured": false, 00:11:22.119 "data_offset": 0, 00:11:22.119 "data_size": 0 00:11:22.119 }, 00:11:22.119 { 00:11:22.119 "name": "BaseBdev4", 00:11:22.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.119 "is_configured": false, 00:11:22.119 "data_offset": 0, 00:11:22.119 "data_size": 0 00:11:22.119 } 00:11:22.119 ] 00:11:22.119 }' 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.119 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.687 [2024-11-15 11:23:05.469849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.687 BaseBdev3 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.687 [ 00:11:22.687 { 00:11:22.687 "name": "BaseBdev3", 00:11:22.687 "aliases": [ 00:11:22.687 "3bf3b6fe-22db-4adf-bc10-5b10308877a8" 00:11:22.687 ], 00:11:22.687 "product_name": "Malloc disk", 00:11:22.687 "block_size": 512, 00:11:22.687 "num_blocks": 65536, 00:11:22.687 "uuid": "3bf3b6fe-22db-4adf-bc10-5b10308877a8", 00:11:22.687 "assigned_rate_limits": { 00:11:22.687 "rw_ios_per_sec": 0, 00:11:22.687 "rw_mbytes_per_sec": 0, 00:11:22.687 "r_mbytes_per_sec": 0, 00:11:22.687 "w_mbytes_per_sec": 0 00:11:22.687 }, 00:11:22.687 "claimed": true, 00:11:22.687 "claim_type": "exclusive_write", 00:11:22.687 "zoned": false, 00:11:22.687 "supported_io_types": { 00:11:22.687 "read": true, 00:11:22.687 "write": true, 00:11:22.687 "unmap": true, 00:11:22.687 "flush": true, 00:11:22.687 "reset": true, 00:11:22.687 "nvme_admin": false, 00:11:22.687 "nvme_io": false, 00:11:22.687 "nvme_io_md": false, 00:11:22.687 "write_zeroes": true, 00:11:22.687 "zcopy": true, 00:11:22.687 "get_zone_info": false, 00:11:22.687 "zone_management": false, 00:11:22.687 "zone_append": false, 00:11:22.687 "compare": false, 00:11:22.687 "compare_and_write": false, 
00:11:22.687 "abort": true, 00:11:22.687 "seek_hole": false, 00:11:22.687 "seek_data": false, 00:11:22.687 "copy": true, 00:11:22.687 "nvme_iov_md": false 00:11:22.687 }, 00:11:22.687 "memory_domains": [ 00:11:22.687 { 00:11:22.687 "dma_device_id": "system", 00:11:22.687 "dma_device_type": 1 00:11:22.687 }, 00:11:22.687 { 00:11:22.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.687 "dma_device_type": 2 00:11:22.687 } 00:11:22.687 ], 00:11:22.687 "driver_specific": {} 00:11:22.687 } 00:11:22.687 ] 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.687 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.687 "name": "Existed_Raid", 00:11:22.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.687 "strip_size_kb": 64, 00:11:22.687 "state": "configuring", 00:11:22.687 "raid_level": "raid0", 00:11:22.687 "superblock": false, 00:11:22.687 "num_base_bdevs": 4, 00:11:22.687 "num_base_bdevs_discovered": 3, 00:11:22.687 "num_base_bdevs_operational": 4, 00:11:22.687 "base_bdevs_list": [ 00:11:22.687 { 00:11:22.687 "name": "BaseBdev1", 00:11:22.687 "uuid": "72ece926-4000-4d28-892f-02846972303b", 00:11:22.687 "is_configured": true, 00:11:22.687 "data_offset": 0, 00:11:22.687 "data_size": 65536 00:11:22.687 }, 00:11:22.687 { 00:11:22.687 "name": "BaseBdev2", 00:11:22.687 "uuid": "d7a11fe2-1fe0-4e7f-9455-a9d787a632ff", 00:11:22.687 "is_configured": true, 00:11:22.687 "data_offset": 0, 00:11:22.687 "data_size": 65536 00:11:22.687 }, 00:11:22.687 { 00:11:22.687 "name": "BaseBdev3", 00:11:22.687 "uuid": "3bf3b6fe-22db-4adf-bc10-5b10308877a8", 00:11:22.687 "is_configured": true, 00:11:22.687 "data_offset": 0, 00:11:22.687 "data_size": 65536 00:11:22.687 }, 00:11:22.687 { 00:11:22.687 "name": "BaseBdev4", 00:11:22.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.688 "is_configured": false, 
00:11:22.688 "data_offset": 0, 00:11:22.688 "data_size": 0 00:11:22.688 } 00:11:22.688 ] 00:11:22.688 }' 00:11:22.688 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.688 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.256 [2024-11-15 11:23:06.086570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:23.256 BaseBdev4 00:11:23.256 [2024-11-15 11:23:06.086888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:23.256 [2024-11-15 11:23:06.086915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:23.256 [2024-11-15 11:23:06.087407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:23.256 [2024-11-15 11:23:06.087647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:23.256 [2024-11-15 11:23:06.087669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:23.256 [2024-11-15 11:23:06.088010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.256 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.256 [ 00:11:23.256 { 00:11:23.256 "name": "BaseBdev4", 00:11:23.256 "aliases": [ 00:11:23.256 "87bc1482-0231-4b9a-8520-1759688ec175" 00:11:23.256 ], 00:11:23.256 "product_name": "Malloc disk", 00:11:23.256 "block_size": 512, 00:11:23.256 "num_blocks": 65536, 00:11:23.256 "uuid": "87bc1482-0231-4b9a-8520-1759688ec175", 00:11:23.256 "assigned_rate_limits": { 00:11:23.256 "rw_ios_per_sec": 0, 00:11:23.256 "rw_mbytes_per_sec": 0, 00:11:23.256 "r_mbytes_per_sec": 0, 00:11:23.256 "w_mbytes_per_sec": 0 00:11:23.256 }, 00:11:23.256 "claimed": true, 00:11:23.256 "claim_type": "exclusive_write", 00:11:23.256 "zoned": false, 00:11:23.256 "supported_io_types": { 00:11:23.256 "read": true, 00:11:23.256 "write": true, 00:11:23.256 "unmap": true, 00:11:23.256 "flush": true, 00:11:23.256 "reset": true, 00:11:23.256 
"nvme_admin": false, 00:11:23.256 "nvme_io": false, 00:11:23.256 "nvme_io_md": false, 00:11:23.256 "write_zeroes": true, 00:11:23.256 "zcopy": true, 00:11:23.256 "get_zone_info": false, 00:11:23.256 "zone_management": false, 00:11:23.256 "zone_append": false, 00:11:23.256 "compare": false, 00:11:23.256 "compare_and_write": false, 00:11:23.256 "abort": true, 00:11:23.256 "seek_hole": false, 00:11:23.256 "seek_data": false, 00:11:23.256 "copy": true, 00:11:23.256 "nvme_iov_md": false 00:11:23.256 }, 00:11:23.256 "memory_domains": [ 00:11:23.256 { 00:11:23.256 "dma_device_id": "system", 00:11:23.256 "dma_device_type": 1 00:11:23.256 }, 00:11:23.256 { 00:11:23.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.256 "dma_device_type": 2 00:11:23.256 } 00:11:23.256 ], 00:11:23.257 "driver_specific": {} 00:11:23.257 } 00:11:23.257 ] 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.257 11:23:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.257 "name": "Existed_Raid", 00:11:23.257 "uuid": "bef91c22-d24d-4024-bbb9-f4c4bc26f814", 00:11:23.257 "strip_size_kb": 64, 00:11:23.257 "state": "online", 00:11:23.257 "raid_level": "raid0", 00:11:23.257 "superblock": false, 00:11:23.257 "num_base_bdevs": 4, 00:11:23.257 "num_base_bdevs_discovered": 4, 00:11:23.257 "num_base_bdevs_operational": 4, 00:11:23.257 "base_bdevs_list": [ 00:11:23.257 { 00:11:23.257 "name": "BaseBdev1", 00:11:23.257 "uuid": "72ece926-4000-4d28-892f-02846972303b", 00:11:23.257 "is_configured": true, 00:11:23.257 "data_offset": 0, 00:11:23.257 "data_size": 65536 00:11:23.257 }, 00:11:23.257 { 00:11:23.257 "name": "BaseBdev2", 00:11:23.257 "uuid": "d7a11fe2-1fe0-4e7f-9455-a9d787a632ff", 00:11:23.257 "is_configured": true, 00:11:23.257 "data_offset": 0, 00:11:23.257 "data_size": 65536 00:11:23.257 }, 00:11:23.257 { 00:11:23.257 "name": "BaseBdev3", 00:11:23.257 "uuid": 
"3bf3b6fe-22db-4adf-bc10-5b10308877a8", 00:11:23.257 "is_configured": true, 00:11:23.257 "data_offset": 0, 00:11:23.257 "data_size": 65536 00:11:23.257 }, 00:11:23.257 { 00:11:23.257 "name": "BaseBdev4", 00:11:23.257 "uuid": "87bc1482-0231-4b9a-8520-1759688ec175", 00:11:23.257 "is_configured": true, 00:11:23.257 "data_offset": 0, 00:11:23.257 "data_size": 65536 00:11:23.257 } 00:11:23.257 ] 00:11:23.257 }' 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.257 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.825 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.825 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.825 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:23.825 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.825 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.825 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:23.825 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.825 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.825 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.825 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.825 [2024-11-15 11:23:06.643142] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.825 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.825 11:23:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.825 "name": "Existed_Raid", 00:11:23.825 "aliases": [ 00:11:23.825 "bef91c22-d24d-4024-bbb9-f4c4bc26f814" 00:11:23.825 ], 00:11:23.825 "product_name": "Raid Volume", 00:11:23.825 "block_size": 512, 00:11:23.825 "num_blocks": 262144, 00:11:23.825 "uuid": "bef91c22-d24d-4024-bbb9-f4c4bc26f814", 00:11:23.825 "assigned_rate_limits": { 00:11:23.825 "rw_ios_per_sec": 0, 00:11:23.825 "rw_mbytes_per_sec": 0, 00:11:23.825 "r_mbytes_per_sec": 0, 00:11:23.825 "w_mbytes_per_sec": 0 00:11:23.825 }, 00:11:23.825 "claimed": false, 00:11:23.825 "zoned": false, 00:11:23.825 "supported_io_types": { 00:11:23.825 "read": true, 00:11:23.825 "write": true, 00:11:23.825 "unmap": true, 00:11:23.825 "flush": true, 00:11:23.825 "reset": true, 00:11:23.825 "nvme_admin": false, 00:11:23.825 "nvme_io": false, 00:11:23.825 "nvme_io_md": false, 00:11:23.825 "write_zeroes": true, 00:11:23.825 "zcopy": false, 00:11:23.825 "get_zone_info": false, 00:11:23.825 "zone_management": false, 00:11:23.825 "zone_append": false, 00:11:23.825 "compare": false, 00:11:23.825 "compare_and_write": false, 00:11:23.825 "abort": false, 00:11:23.825 "seek_hole": false, 00:11:23.825 "seek_data": false, 00:11:23.825 "copy": false, 00:11:23.825 "nvme_iov_md": false 00:11:23.825 }, 00:11:23.825 "memory_domains": [ 00:11:23.825 { 00:11:23.825 "dma_device_id": "system", 00:11:23.825 "dma_device_type": 1 00:11:23.825 }, 00:11:23.825 { 00:11:23.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.825 "dma_device_type": 2 00:11:23.825 }, 00:11:23.825 { 00:11:23.825 "dma_device_id": "system", 00:11:23.825 "dma_device_type": 1 00:11:23.825 }, 00:11:23.825 { 00:11:23.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.825 "dma_device_type": 2 00:11:23.825 }, 00:11:23.825 { 00:11:23.825 "dma_device_id": "system", 00:11:23.825 "dma_device_type": 1 00:11:23.825 }, 00:11:23.825 { 00:11:23.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:23.825 "dma_device_type": 2 00:11:23.826 }, 00:11:23.826 { 00:11:23.826 "dma_device_id": "system", 00:11:23.826 "dma_device_type": 1 00:11:23.826 }, 00:11:23.826 { 00:11:23.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.826 "dma_device_type": 2 00:11:23.826 } 00:11:23.826 ], 00:11:23.826 "driver_specific": { 00:11:23.826 "raid": { 00:11:23.826 "uuid": "bef91c22-d24d-4024-bbb9-f4c4bc26f814", 00:11:23.826 "strip_size_kb": 64, 00:11:23.826 "state": "online", 00:11:23.826 "raid_level": "raid0", 00:11:23.826 "superblock": false, 00:11:23.826 "num_base_bdevs": 4, 00:11:23.826 "num_base_bdevs_discovered": 4, 00:11:23.826 "num_base_bdevs_operational": 4, 00:11:23.826 "base_bdevs_list": [ 00:11:23.826 { 00:11:23.826 "name": "BaseBdev1", 00:11:23.826 "uuid": "72ece926-4000-4d28-892f-02846972303b", 00:11:23.826 "is_configured": true, 00:11:23.826 "data_offset": 0, 00:11:23.826 "data_size": 65536 00:11:23.826 }, 00:11:23.826 { 00:11:23.826 "name": "BaseBdev2", 00:11:23.826 "uuid": "d7a11fe2-1fe0-4e7f-9455-a9d787a632ff", 00:11:23.826 "is_configured": true, 00:11:23.826 "data_offset": 0, 00:11:23.826 "data_size": 65536 00:11:23.826 }, 00:11:23.826 { 00:11:23.826 "name": "BaseBdev3", 00:11:23.826 "uuid": "3bf3b6fe-22db-4adf-bc10-5b10308877a8", 00:11:23.826 "is_configured": true, 00:11:23.826 "data_offset": 0, 00:11:23.826 "data_size": 65536 00:11:23.826 }, 00:11:23.826 { 00:11:23.826 "name": "BaseBdev4", 00:11:23.826 "uuid": "87bc1482-0231-4b9a-8520-1759688ec175", 00:11:23.826 "is_configured": true, 00:11:23.826 "data_offset": 0, 00:11:23.826 "data_size": 65536 00:11:23.826 } 00:11:23.826 ] 00:11:23.826 } 00:11:23.826 } 00:11:23.826 }' 00:11:23.826 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.826 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:23.826 BaseBdev2 00:11:23.826 BaseBdev3 
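The `base_bdev_names` assignment above comes from the jq filter at bdev_raid.sh@188, which walks the raid bdev's `driver_specific` JSON and keeps only configured members. A standalone sketch of that filter on a reduced sample document (the sample data is illustrative, not from this run):

```shell
# Sketch of the jq filter from bdev_raid.sh@188 above: select the names of
# configured base bdevs out of a raid bdev's driver_specific JSON.
info='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"BaseBdev1","is_configured":true},
  {"name":"BaseBdev2","is_configured":false}]}}}'
echo "$info" | jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
```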
00:11:23.826 BaseBdev4' 00:11:23.826 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.085 11:23:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.086 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.086 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.086 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:24.086 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.086 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.086 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.086 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.086 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.086 11:23:07 
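The `cmp_base_bdev='512 '` values and the `[[ 512 == \5\1\2\ \ \ ]]` checks above compare the raid volume's block layout against each base bdev via the jq projection at bdev_raid.sh@189/@192. jq's `join(" ")` renders missing or null fields as empty strings, which is why a plain malloc bdev (no metadata, no DIF) yields `512` followed by three spaces. A sketch on sample input (null fields assumed to stand in for the absent keys in the real output):

```shell
# Sketch of the metadata comparison from bdev_raid.sh@189/@192 above: null
# fields become empty strings under join(" "), so a malloc bdev produces
# "512" plus three trailing spaces, matching the [[ ... ]] pattern in the log.
bdev='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'
cmp=$(echo "$bdev" | jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
[[ $cmp == "512   " ]] && echo match
```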
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.086 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:24.086 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.086 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.086 [2024-11-15 11:23:07.011034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:24.086 [2024-11-15 11:23:07.011255] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.086 [2024-11-15 11:23:07.011435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.345 "name": "Existed_Raid", 00:11:24.345 "uuid": "bef91c22-d24d-4024-bbb9-f4c4bc26f814", 00:11:24.345 "strip_size_kb": 64, 00:11:24.345 "state": "offline", 00:11:24.345 "raid_level": "raid0", 00:11:24.345 "superblock": false, 00:11:24.345 "num_base_bdevs": 4, 00:11:24.345 "num_base_bdevs_discovered": 3, 00:11:24.345 "num_base_bdevs_operational": 3, 00:11:24.345 "base_bdevs_list": [ 00:11:24.345 { 00:11:24.345 "name": null, 00:11:24.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.345 "is_configured": false, 00:11:24.345 "data_offset": 0, 00:11:24.345 "data_size": 65536 00:11:24.345 }, 00:11:24.345 { 00:11:24.345 "name": "BaseBdev2", 00:11:24.345 "uuid": "d7a11fe2-1fe0-4e7f-9455-a9d787a632ff", 00:11:24.345 "is_configured": 
true, 00:11:24.345 "data_offset": 0, 00:11:24.345 "data_size": 65536 00:11:24.345 }, 00:11:24.345 { 00:11:24.345 "name": "BaseBdev3", 00:11:24.345 "uuid": "3bf3b6fe-22db-4adf-bc10-5b10308877a8", 00:11:24.345 "is_configured": true, 00:11:24.345 "data_offset": 0, 00:11:24.345 "data_size": 65536 00:11:24.345 }, 00:11:24.345 { 00:11:24.345 "name": "BaseBdev4", 00:11:24.345 "uuid": "87bc1482-0231-4b9a-8520-1759688ec175", 00:11:24.345 "is_configured": true, 00:11:24.345 "data_offset": 0, 00:11:24.345 "data_size": 65536 00:11:24.345 } 00:11:24.345 ] 00:11:24.345 }' 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.345 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
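The offline expectation verified above follows from the `has_redundancy raid0` branch at bdev_raid.sh@198-200: raid0 stripes without parity or mirroring, so removing BaseBdev1 drives the array from online to offline. A hedged sketch of that decision (the list of redundant levels here is an assumption about the helper, not read from this log):

```shell
# Hedged sketch of the has_redundancy check at bdev_raid.sh@198-200 above:
# non-redundant levels return 1, which selects expected_state=offline.
has_redundancy() {
    case $1 in
        raid1|raid5f) return 0 ;;   # assumed redundant levels
        *) return 1 ;;
    esac
}
if has_redundancy raid0; then expected_state=online; else expected_state=offline; fi
echo "$expected_state"
```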
00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.912 [2024-11-15 11:23:07.693718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.912 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.913 [2024-11-15 11:23:07.837728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:25.171 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.171 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.171 11:23:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.171 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.171 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.171 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.171 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.171 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.171 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.171 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.172 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:25.172 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.172 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.172 [2024-11-15 11:23:07.984571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:25.172 [2024-11-15 11:23:07.984806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:25.172 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.172 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.172 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.172 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.172 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:25.172 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:25.172 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.172 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.431 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:25.431 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:25.431 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:25.431 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:25.431 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.431 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:25.431 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.431 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.431 BaseBdev2 00:11:25.431 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.432 [ 00:11:25.432 { 00:11:25.432 "name": "BaseBdev2", 00:11:25.432 "aliases": [ 00:11:25.432 "8fb97046-5199-4988-a0c7-114b048e269a" 00:11:25.432 ], 00:11:25.432 "product_name": "Malloc disk", 00:11:25.432 "block_size": 512, 00:11:25.432 "num_blocks": 65536, 00:11:25.432 "uuid": "8fb97046-5199-4988-a0c7-114b048e269a", 00:11:25.432 "assigned_rate_limits": { 00:11:25.432 "rw_ios_per_sec": 0, 00:11:25.432 "rw_mbytes_per_sec": 0, 00:11:25.432 "r_mbytes_per_sec": 0, 00:11:25.432 "w_mbytes_per_sec": 0 00:11:25.432 }, 00:11:25.432 "claimed": false, 00:11:25.432 "zoned": false, 00:11:25.432 "supported_io_types": { 00:11:25.432 "read": true, 00:11:25.432 "write": true, 00:11:25.432 "unmap": true, 00:11:25.432 "flush": true, 00:11:25.432 "reset": true, 00:11:25.432 "nvme_admin": false, 00:11:25.432 "nvme_io": false, 00:11:25.432 "nvme_io_md": false, 00:11:25.432 "write_zeroes": true, 00:11:25.432 "zcopy": true, 00:11:25.432 "get_zone_info": false, 00:11:25.432 "zone_management": false, 00:11:25.432 "zone_append": false, 00:11:25.432 "compare": false, 00:11:25.432 "compare_and_write": false, 00:11:25.432 "abort": true, 00:11:25.432 "seek_hole": false, 00:11:25.432 
"seek_data": false, 00:11:25.432 "copy": true, 00:11:25.432 "nvme_iov_md": false 00:11:25.432 }, 00:11:25.432 "memory_domains": [ 00:11:25.432 { 00:11:25.432 "dma_device_id": "system", 00:11:25.432 "dma_device_type": 1 00:11:25.432 }, 00:11:25.432 { 00:11:25.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.432 "dma_device_type": 2 00:11:25.432 } 00:11:25.432 ], 00:11:25.432 "driver_specific": {} 00:11:25.432 } 00:11:25.432 ] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.432 BaseBdev3 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.432 [ 00:11:25.432 { 00:11:25.432 "name": "BaseBdev3", 00:11:25.432 "aliases": [ 00:11:25.432 "abe7aea3-a4b0-46f2-8d6f-59033f80ce61" 00:11:25.432 ], 00:11:25.432 "product_name": "Malloc disk", 00:11:25.432 "block_size": 512, 00:11:25.432 "num_blocks": 65536, 00:11:25.432 "uuid": "abe7aea3-a4b0-46f2-8d6f-59033f80ce61", 00:11:25.432 "assigned_rate_limits": { 00:11:25.432 "rw_ios_per_sec": 0, 00:11:25.432 "rw_mbytes_per_sec": 0, 00:11:25.432 "r_mbytes_per_sec": 0, 00:11:25.432 "w_mbytes_per_sec": 0 00:11:25.432 }, 00:11:25.432 "claimed": false, 00:11:25.432 "zoned": false, 00:11:25.432 "supported_io_types": { 00:11:25.432 "read": true, 00:11:25.432 "write": true, 00:11:25.432 "unmap": true, 00:11:25.432 "flush": true, 00:11:25.432 "reset": true, 00:11:25.432 "nvme_admin": false, 00:11:25.432 "nvme_io": false, 00:11:25.432 "nvme_io_md": false, 00:11:25.432 "write_zeroes": true, 00:11:25.432 "zcopy": true, 00:11:25.432 "get_zone_info": false, 00:11:25.432 "zone_management": false, 00:11:25.432 "zone_append": false, 00:11:25.432 "compare": false, 00:11:25.432 "compare_and_write": false, 00:11:25.432 "abort": true, 00:11:25.432 "seek_hole": false, 00:11:25.432 "seek_data": false, 
00:11:25.432 "copy": true, 00:11:25.432 "nvme_iov_md": false 00:11:25.432 }, 00:11:25.432 "memory_domains": [ 00:11:25.432 { 00:11:25.432 "dma_device_id": "system", 00:11:25.432 "dma_device_type": 1 00:11:25.432 }, 00:11:25.432 { 00:11:25.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.432 "dma_device_type": 2 00:11:25.432 } 00:11:25.432 ], 00:11:25.432 "driver_specific": {} 00:11:25.432 } 00:11:25.432 ] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.432 BaseBdev4 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:25.432 
11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.432 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.432 [ 00:11:25.432 { 00:11:25.432 "name": "BaseBdev4", 00:11:25.432 "aliases": [ 00:11:25.432 "d12ef8ab-8f56-4d7a-8d4b-daed1ad4df0c" 00:11:25.432 ], 00:11:25.432 "product_name": "Malloc disk", 00:11:25.432 "block_size": 512, 00:11:25.432 "num_blocks": 65536, 00:11:25.432 "uuid": "d12ef8ab-8f56-4d7a-8d4b-daed1ad4df0c", 00:11:25.432 "assigned_rate_limits": { 00:11:25.432 "rw_ios_per_sec": 0, 00:11:25.432 "rw_mbytes_per_sec": 0, 00:11:25.432 "r_mbytes_per_sec": 0, 00:11:25.432 "w_mbytes_per_sec": 0 00:11:25.432 }, 00:11:25.432 "claimed": false, 00:11:25.432 "zoned": false, 00:11:25.432 "supported_io_types": { 00:11:25.432 "read": true, 00:11:25.432 "write": true, 00:11:25.432 "unmap": true, 00:11:25.432 "flush": true, 00:11:25.432 "reset": true, 00:11:25.432 "nvme_admin": false, 00:11:25.432 "nvme_io": false, 00:11:25.432 "nvme_io_md": false, 00:11:25.432 "write_zeroes": true, 00:11:25.432 "zcopy": true, 00:11:25.432 "get_zone_info": false, 00:11:25.432 "zone_management": false, 00:11:25.432 "zone_append": false, 00:11:25.432 "compare": false, 00:11:25.432 "compare_and_write": false, 00:11:25.432 "abort": true, 00:11:25.432 "seek_hole": false, 00:11:25.432 "seek_data": false, 00:11:25.432 
"copy": true, 00:11:25.432 "nvme_iov_md": false 00:11:25.432 }, 00:11:25.432 "memory_domains": [ 00:11:25.432 { 00:11:25.432 "dma_device_id": "system", 00:11:25.432 "dma_device_type": 1 00:11:25.432 }, 00:11:25.433 { 00:11:25.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.433 "dma_device_type": 2 00:11:25.433 } 00:11:25.433 ], 00:11:25.433 "driver_specific": {} 00:11:25.433 } 00:11:25.433 ] 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.433 [2024-11-15 11:23:08.357040] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.433 [2024-11-15 11:23:08.357270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.433 [2024-11-15 11:23:08.357420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.433 [2024-11-15 11:23:08.360091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.433 [2024-11-15 11:23:08.360347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.433 11:23:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.433 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.692 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.692 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.692 "name": "Existed_Raid", 00:11:25.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.692 "strip_size_kb": 64, 00:11:25.692 "state": "configuring", 00:11:25.692 
"raid_level": "raid0", 00:11:25.692 "superblock": false, 00:11:25.692 "num_base_bdevs": 4, 00:11:25.692 "num_base_bdevs_discovered": 3, 00:11:25.692 "num_base_bdevs_operational": 4, 00:11:25.692 "base_bdevs_list": [ 00:11:25.692 { 00:11:25.692 "name": "BaseBdev1", 00:11:25.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.692 "is_configured": false, 00:11:25.692 "data_offset": 0, 00:11:25.692 "data_size": 0 00:11:25.692 }, 00:11:25.692 { 00:11:25.692 "name": "BaseBdev2", 00:11:25.692 "uuid": "8fb97046-5199-4988-a0c7-114b048e269a", 00:11:25.692 "is_configured": true, 00:11:25.692 "data_offset": 0, 00:11:25.692 "data_size": 65536 00:11:25.692 }, 00:11:25.692 { 00:11:25.692 "name": "BaseBdev3", 00:11:25.692 "uuid": "abe7aea3-a4b0-46f2-8d6f-59033f80ce61", 00:11:25.692 "is_configured": true, 00:11:25.692 "data_offset": 0, 00:11:25.692 "data_size": 65536 00:11:25.692 }, 00:11:25.692 { 00:11:25.692 "name": "BaseBdev4", 00:11:25.692 "uuid": "d12ef8ab-8f56-4d7a-8d4b-daed1ad4df0c", 00:11:25.692 "is_configured": true, 00:11:25.692 "data_offset": 0, 00:11:25.692 "data_size": 65536 00:11:25.692 } 00:11:25.692 ] 00:11:25.692 }' 00:11:25.692 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.692 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.951 [2024-11-15 11:23:08.881299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.951 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.210 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.210 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.210 "name": "Existed_Raid", 00:11:26.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.210 "strip_size_kb": 64, 00:11:26.210 "state": "configuring", 00:11:26.210 "raid_level": "raid0", 00:11:26.210 "superblock": false, 00:11:26.210 
"num_base_bdevs": 4, 00:11:26.210 "num_base_bdevs_discovered": 2, 00:11:26.210 "num_base_bdevs_operational": 4, 00:11:26.210 "base_bdevs_list": [ 00:11:26.210 { 00:11:26.210 "name": "BaseBdev1", 00:11:26.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.210 "is_configured": false, 00:11:26.210 "data_offset": 0, 00:11:26.210 "data_size": 0 00:11:26.210 }, 00:11:26.210 { 00:11:26.210 "name": null, 00:11:26.210 "uuid": "8fb97046-5199-4988-a0c7-114b048e269a", 00:11:26.210 "is_configured": false, 00:11:26.210 "data_offset": 0, 00:11:26.210 "data_size": 65536 00:11:26.210 }, 00:11:26.210 { 00:11:26.210 "name": "BaseBdev3", 00:11:26.210 "uuid": "abe7aea3-a4b0-46f2-8d6f-59033f80ce61", 00:11:26.210 "is_configured": true, 00:11:26.210 "data_offset": 0, 00:11:26.210 "data_size": 65536 00:11:26.210 }, 00:11:26.210 { 00:11:26.210 "name": "BaseBdev4", 00:11:26.210 "uuid": "d12ef8ab-8f56-4d7a-8d4b-daed1ad4df0c", 00:11:26.210 "is_configured": true, 00:11:26.210 "data_offset": 0, 00:11:26.210 "data_size": 65536 00:11:26.210 } 00:11:26.210 ] 00:11:26.210 }' 00:11:26.210 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.210 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:26.470 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.470 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.470 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:26.749 11:23:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.749 [2024-11-15 11:23:09.494617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.749 BaseBdev1 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:26.749 [ 00:11:26.749 { 00:11:26.749 "name": "BaseBdev1", 00:11:26.749 "aliases": [ 00:11:26.749 "30f17acd-b98a-4c53-b46a-6597367ac463" 00:11:26.749 ], 00:11:26.749 "product_name": "Malloc disk", 00:11:26.749 "block_size": 512, 00:11:26.749 "num_blocks": 65536, 00:11:26.749 "uuid": "30f17acd-b98a-4c53-b46a-6597367ac463", 00:11:26.749 "assigned_rate_limits": { 00:11:26.749 "rw_ios_per_sec": 0, 00:11:26.749 "rw_mbytes_per_sec": 0, 00:11:26.749 "r_mbytes_per_sec": 0, 00:11:26.749 "w_mbytes_per_sec": 0 00:11:26.749 }, 00:11:26.749 "claimed": true, 00:11:26.749 "claim_type": "exclusive_write", 00:11:26.749 "zoned": false, 00:11:26.749 "supported_io_types": { 00:11:26.749 "read": true, 00:11:26.749 "write": true, 00:11:26.749 "unmap": true, 00:11:26.749 "flush": true, 00:11:26.749 "reset": true, 00:11:26.749 "nvme_admin": false, 00:11:26.749 "nvme_io": false, 00:11:26.749 "nvme_io_md": false, 00:11:26.749 "write_zeroes": true, 00:11:26.749 "zcopy": true, 00:11:26.749 "get_zone_info": false, 00:11:26.749 "zone_management": false, 00:11:26.749 "zone_append": false, 00:11:26.749 "compare": false, 00:11:26.749 "compare_and_write": false, 00:11:26.749 "abort": true, 00:11:26.749 "seek_hole": false, 00:11:26.749 "seek_data": false, 00:11:26.749 "copy": true, 00:11:26.749 "nvme_iov_md": false 00:11:26.749 }, 00:11:26.749 "memory_domains": [ 00:11:26.749 { 00:11:26.749 "dma_device_id": "system", 00:11:26.749 "dma_device_type": 1 00:11:26.749 }, 00:11:26.749 { 00:11:26.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.749 "dma_device_type": 2 00:11:26.749 } 00:11:26.749 ], 00:11:26.749 "driver_specific": {} 00:11:26.749 } 00:11:26.749 ] 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.749 "name": "Existed_Raid", 00:11:26.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.749 "strip_size_kb": 64, 00:11:26.749 "state": "configuring", 00:11:26.749 "raid_level": "raid0", 00:11:26.749 "superblock": false, 
00:11:26.749 "num_base_bdevs": 4, 00:11:26.749 "num_base_bdevs_discovered": 3, 00:11:26.749 "num_base_bdevs_operational": 4, 00:11:26.749 "base_bdevs_list": [ 00:11:26.749 { 00:11:26.749 "name": "BaseBdev1", 00:11:26.749 "uuid": "30f17acd-b98a-4c53-b46a-6597367ac463", 00:11:26.749 "is_configured": true, 00:11:26.749 "data_offset": 0, 00:11:26.749 "data_size": 65536 00:11:26.749 }, 00:11:26.749 { 00:11:26.749 "name": null, 00:11:26.749 "uuid": "8fb97046-5199-4988-a0c7-114b048e269a", 00:11:26.749 "is_configured": false, 00:11:26.749 "data_offset": 0, 00:11:26.749 "data_size": 65536 00:11:26.749 }, 00:11:26.749 { 00:11:26.749 "name": "BaseBdev3", 00:11:26.749 "uuid": "abe7aea3-a4b0-46f2-8d6f-59033f80ce61", 00:11:26.749 "is_configured": true, 00:11:26.749 "data_offset": 0, 00:11:26.749 "data_size": 65536 00:11:26.749 }, 00:11:26.749 { 00:11:26.749 "name": "BaseBdev4", 00:11:26.749 "uuid": "d12ef8ab-8f56-4d7a-8d4b-daed1ad4df0c", 00:11:26.749 "is_configured": true, 00:11:26.749 "data_offset": 0, 00:11:26.749 "data_size": 65536 00:11:26.749 } 00:11:26.749 ] 00:11:26.749 }' 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.749 11:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:27.324 11:23:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.324 [2024-11-15 11:23:10.094866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.324 11:23:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.324 "name": "Existed_Raid", 00:11:27.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.324 "strip_size_kb": 64, 00:11:27.324 "state": "configuring", 00:11:27.324 "raid_level": "raid0", 00:11:27.324 "superblock": false, 00:11:27.324 "num_base_bdevs": 4, 00:11:27.324 "num_base_bdevs_discovered": 2, 00:11:27.324 "num_base_bdevs_operational": 4, 00:11:27.324 "base_bdevs_list": [ 00:11:27.324 { 00:11:27.324 "name": "BaseBdev1", 00:11:27.324 "uuid": "30f17acd-b98a-4c53-b46a-6597367ac463", 00:11:27.324 "is_configured": true, 00:11:27.324 "data_offset": 0, 00:11:27.324 "data_size": 65536 00:11:27.324 }, 00:11:27.324 { 00:11:27.324 "name": null, 00:11:27.324 "uuid": "8fb97046-5199-4988-a0c7-114b048e269a", 00:11:27.324 "is_configured": false, 00:11:27.324 "data_offset": 0, 00:11:27.324 "data_size": 65536 00:11:27.324 }, 00:11:27.324 { 00:11:27.324 "name": null, 00:11:27.324 "uuid": "abe7aea3-a4b0-46f2-8d6f-59033f80ce61", 00:11:27.324 "is_configured": false, 00:11:27.324 "data_offset": 0, 00:11:27.324 "data_size": 65536 00:11:27.324 }, 00:11:27.324 { 00:11:27.324 "name": "BaseBdev4", 00:11:27.324 "uuid": "d12ef8ab-8f56-4d7a-8d4b-daed1ad4df0c", 00:11:27.324 "is_configured": true, 00:11:27.324 "data_offset": 0, 00:11:27.324 "data_size": 65536 00:11:27.324 } 00:11:27.324 ] 00:11:27.324 }' 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.324 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.891 [2024-11-15 11:23:10.671007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.891 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.891 "name": "Existed_Raid", 00:11:27.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.891 "strip_size_kb": 64, 00:11:27.891 "state": "configuring", 00:11:27.891 "raid_level": "raid0", 00:11:27.891 "superblock": false, 00:11:27.891 "num_base_bdevs": 4, 00:11:27.891 "num_base_bdevs_discovered": 3, 00:11:27.891 "num_base_bdevs_operational": 4, 00:11:27.891 "base_bdevs_list": [ 00:11:27.891 { 00:11:27.891 "name": "BaseBdev1", 00:11:27.891 "uuid": "30f17acd-b98a-4c53-b46a-6597367ac463", 00:11:27.891 "is_configured": true, 00:11:27.891 "data_offset": 0, 00:11:27.891 "data_size": 65536 00:11:27.891 }, 00:11:27.891 { 00:11:27.891 "name": null, 00:11:27.891 "uuid": "8fb97046-5199-4988-a0c7-114b048e269a", 00:11:27.891 "is_configured": false, 00:11:27.891 "data_offset": 0, 00:11:27.891 "data_size": 65536 00:11:27.891 }, 00:11:27.891 { 00:11:27.891 "name": "BaseBdev3", 00:11:27.891 "uuid": "abe7aea3-a4b0-46f2-8d6f-59033f80ce61", 
00:11:27.892 "is_configured": true, 00:11:27.892 "data_offset": 0, 00:11:27.892 "data_size": 65536 00:11:27.892 }, 00:11:27.892 { 00:11:27.892 "name": "BaseBdev4", 00:11:27.892 "uuid": "d12ef8ab-8f56-4d7a-8d4b-daed1ad4df0c", 00:11:27.892 "is_configured": true, 00:11:27.892 "data_offset": 0, 00:11:27.892 "data_size": 65536 00:11:27.892 } 00:11:27.892 ] 00:11:27.892 }' 00:11:27.892 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.892 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.460 [2024-11-15 11:23:11.259257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:28.460 11:23:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.460 "name": "Existed_Raid", 00:11:28.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.460 "strip_size_kb": 64, 00:11:28.460 "state": "configuring", 00:11:28.460 "raid_level": "raid0", 00:11:28.460 "superblock": false, 00:11:28.460 "num_base_bdevs": 4, 00:11:28.460 "num_base_bdevs_discovered": 2, 00:11:28.460 
"num_base_bdevs_operational": 4, 00:11:28.460 "base_bdevs_list": [ 00:11:28.460 { 00:11:28.460 "name": null, 00:11:28.460 "uuid": "30f17acd-b98a-4c53-b46a-6597367ac463", 00:11:28.460 "is_configured": false, 00:11:28.460 "data_offset": 0, 00:11:28.460 "data_size": 65536 00:11:28.460 }, 00:11:28.460 { 00:11:28.460 "name": null, 00:11:28.460 "uuid": "8fb97046-5199-4988-a0c7-114b048e269a", 00:11:28.460 "is_configured": false, 00:11:28.460 "data_offset": 0, 00:11:28.460 "data_size": 65536 00:11:28.460 }, 00:11:28.460 { 00:11:28.460 "name": "BaseBdev3", 00:11:28.460 "uuid": "abe7aea3-a4b0-46f2-8d6f-59033f80ce61", 00:11:28.460 "is_configured": true, 00:11:28.460 "data_offset": 0, 00:11:28.460 "data_size": 65536 00:11:28.460 }, 00:11:28.460 { 00:11:28.460 "name": "BaseBdev4", 00:11:28.460 "uuid": "d12ef8ab-8f56-4d7a-8d4b-daed1ad4df0c", 00:11:28.460 "is_configured": true, 00:11:28.460 "data_offset": 0, 00:11:28.460 "data_size": 65536 00:11:28.460 } 00:11:28.460 ] 00:11:28.460 }' 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.460 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.028 [2024-11-15 11:23:11.924667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.028 
11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.028 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.286 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.286 "name": "Existed_Raid", 00:11:29.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.286 "strip_size_kb": 64, 00:11:29.286 "state": "configuring", 00:11:29.286 "raid_level": "raid0", 00:11:29.286 "superblock": false, 00:11:29.286 "num_base_bdevs": 4, 00:11:29.286 "num_base_bdevs_discovered": 3, 00:11:29.286 "num_base_bdevs_operational": 4, 00:11:29.286 "base_bdevs_list": [ 00:11:29.286 { 00:11:29.286 "name": null, 00:11:29.286 "uuid": "30f17acd-b98a-4c53-b46a-6597367ac463", 00:11:29.286 "is_configured": false, 00:11:29.286 "data_offset": 0, 00:11:29.286 "data_size": 65536 00:11:29.286 }, 00:11:29.286 { 00:11:29.286 "name": "BaseBdev2", 00:11:29.286 "uuid": "8fb97046-5199-4988-a0c7-114b048e269a", 00:11:29.286 "is_configured": true, 00:11:29.286 "data_offset": 0, 00:11:29.286 "data_size": 65536 00:11:29.286 }, 00:11:29.286 { 00:11:29.286 "name": "BaseBdev3", 00:11:29.286 "uuid": "abe7aea3-a4b0-46f2-8d6f-59033f80ce61", 00:11:29.286 "is_configured": true, 00:11:29.286 "data_offset": 0, 00:11:29.287 "data_size": 65536 00:11:29.287 }, 00:11:29.287 { 00:11:29.287 "name": "BaseBdev4", 00:11:29.287 "uuid": "d12ef8ab-8f56-4d7a-8d4b-daed1ad4df0c", 00:11:29.287 "is_configured": true, 00:11:29.287 "data_offset": 0, 00:11:29.287 "data_size": 65536 00:11:29.287 } 00:11:29.287 ] 00:11:29.287 }' 00:11:29.287 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.287 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.545 11:23:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.545 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.545 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.545 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 30f17acd-b98a-4c53-b46a-6597367ac463 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.804 [2024-11-15 11:23:12.605998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:29.804 [2024-11-15 11:23:12.606294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:29.804 [2024-11-15 11:23:12.606320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:29.804 [2024-11-15 11:23:12.606711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:29.804 
[2024-11-15 11:23:12.606924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:29.804 [2024-11-15 11:23:12.606944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:29.804 [2024-11-15 11:23:12.607445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.804 NewBaseBdev 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:29.804 [ 00:11:29.804 { 00:11:29.804 "name": "NewBaseBdev", 00:11:29.804 "aliases": [ 00:11:29.804 "30f17acd-b98a-4c53-b46a-6597367ac463" 00:11:29.804 ], 00:11:29.804 "product_name": "Malloc disk", 00:11:29.804 "block_size": 512, 00:11:29.804 "num_blocks": 65536, 00:11:29.804 "uuid": "30f17acd-b98a-4c53-b46a-6597367ac463", 00:11:29.804 "assigned_rate_limits": { 00:11:29.804 "rw_ios_per_sec": 0, 00:11:29.804 "rw_mbytes_per_sec": 0, 00:11:29.804 "r_mbytes_per_sec": 0, 00:11:29.804 "w_mbytes_per_sec": 0 00:11:29.804 }, 00:11:29.804 "claimed": true, 00:11:29.804 "claim_type": "exclusive_write", 00:11:29.804 "zoned": false, 00:11:29.804 "supported_io_types": { 00:11:29.804 "read": true, 00:11:29.804 "write": true, 00:11:29.804 "unmap": true, 00:11:29.804 "flush": true, 00:11:29.804 "reset": true, 00:11:29.804 "nvme_admin": false, 00:11:29.804 "nvme_io": false, 00:11:29.804 "nvme_io_md": false, 00:11:29.804 "write_zeroes": true, 00:11:29.804 "zcopy": true, 00:11:29.804 "get_zone_info": false, 00:11:29.804 "zone_management": false, 00:11:29.804 "zone_append": false, 00:11:29.804 "compare": false, 00:11:29.804 "compare_and_write": false, 00:11:29.804 "abort": true, 00:11:29.804 "seek_hole": false, 00:11:29.804 "seek_data": false, 00:11:29.804 "copy": true, 00:11:29.804 "nvme_iov_md": false 00:11:29.804 }, 00:11:29.804 "memory_domains": [ 00:11:29.804 { 00:11:29.804 "dma_device_id": "system", 00:11:29.804 "dma_device_type": 1 00:11:29.804 }, 00:11:29.804 { 00:11:29.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.804 "dma_device_type": 2 00:11:29.804 } 00:11:29.804 ], 00:11:29.804 "driver_specific": {} 00:11:29.804 } 00:11:29.804 ] 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.804 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.804 "name": "Existed_Raid", 00:11:29.804 "uuid": "5d83a7bd-055b-457a-a9d1-85e4d85026c4", 00:11:29.804 "strip_size_kb": 64, 00:11:29.804 "state": "online", 00:11:29.804 "raid_level": "raid0", 00:11:29.804 "superblock": false, 00:11:29.804 "num_base_bdevs": 4, 00:11:29.804 
"num_base_bdevs_discovered": 4, 00:11:29.804 "num_base_bdevs_operational": 4, 00:11:29.804 "base_bdevs_list": [ 00:11:29.804 { 00:11:29.804 "name": "NewBaseBdev", 00:11:29.804 "uuid": "30f17acd-b98a-4c53-b46a-6597367ac463", 00:11:29.804 "is_configured": true, 00:11:29.804 "data_offset": 0, 00:11:29.804 "data_size": 65536 00:11:29.804 }, 00:11:29.804 { 00:11:29.804 "name": "BaseBdev2", 00:11:29.804 "uuid": "8fb97046-5199-4988-a0c7-114b048e269a", 00:11:29.804 "is_configured": true, 00:11:29.804 "data_offset": 0, 00:11:29.804 "data_size": 65536 00:11:29.804 }, 00:11:29.804 { 00:11:29.804 "name": "BaseBdev3", 00:11:29.804 "uuid": "abe7aea3-a4b0-46f2-8d6f-59033f80ce61", 00:11:29.804 "is_configured": true, 00:11:29.804 "data_offset": 0, 00:11:29.804 "data_size": 65536 00:11:29.804 }, 00:11:29.804 { 00:11:29.804 "name": "BaseBdev4", 00:11:29.804 "uuid": "d12ef8ab-8f56-4d7a-8d4b-daed1ad4df0c", 00:11:29.804 "is_configured": true, 00:11:29.804 "data_offset": 0, 00:11:29.805 "data_size": 65536 00:11:29.805 } 00:11:29.805 ] 00:11:29.805 }' 00:11:29.805 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.805 11:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.371 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.371 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.371 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.371 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.371 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.371 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.371 11:23:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.371 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.371 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.371 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.371 [2024-11-15 11:23:13.158725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.371 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.371 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.371 "name": "Existed_Raid", 00:11:30.371 "aliases": [ 00:11:30.371 "5d83a7bd-055b-457a-a9d1-85e4d85026c4" 00:11:30.371 ], 00:11:30.371 "product_name": "Raid Volume", 00:11:30.371 "block_size": 512, 00:11:30.371 "num_blocks": 262144, 00:11:30.371 "uuid": "5d83a7bd-055b-457a-a9d1-85e4d85026c4", 00:11:30.372 "assigned_rate_limits": { 00:11:30.372 "rw_ios_per_sec": 0, 00:11:30.372 "rw_mbytes_per_sec": 0, 00:11:30.372 "r_mbytes_per_sec": 0, 00:11:30.372 "w_mbytes_per_sec": 0 00:11:30.372 }, 00:11:30.372 "claimed": false, 00:11:30.372 "zoned": false, 00:11:30.372 "supported_io_types": { 00:11:30.372 "read": true, 00:11:30.372 "write": true, 00:11:30.372 "unmap": true, 00:11:30.372 "flush": true, 00:11:30.372 "reset": true, 00:11:30.372 "nvme_admin": false, 00:11:30.372 "nvme_io": false, 00:11:30.372 "nvme_io_md": false, 00:11:30.372 "write_zeroes": true, 00:11:30.372 "zcopy": false, 00:11:30.372 "get_zone_info": false, 00:11:30.372 "zone_management": false, 00:11:30.372 "zone_append": false, 00:11:30.372 "compare": false, 00:11:30.372 "compare_and_write": false, 00:11:30.372 "abort": false, 00:11:30.372 "seek_hole": false, 00:11:30.372 "seek_data": false, 00:11:30.372 "copy": false, 00:11:30.372 "nvme_iov_md": false 00:11:30.372 }, 00:11:30.372 "memory_domains": [ 
00:11:30.372 { 00:11:30.372 "dma_device_id": "system", 00:11:30.372 "dma_device_type": 1 00:11:30.372 }, 00:11:30.372 { 00:11:30.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.372 "dma_device_type": 2 00:11:30.372 }, 00:11:30.372 { 00:11:30.372 "dma_device_id": "system", 00:11:30.372 "dma_device_type": 1 00:11:30.372 }, 00:11:30.372 { 00:11:30.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.372 "dma_device_type": 2 00:11:30.372 }, 00:11:30.372 { 00:11:30.372 "dma_device_id": "system", 00:11:30.372 "dma_device_type": 1 00:11:30.372 }, 00:11:30.372 { 00:11:30.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.372 "dma_device_type": 2 00:11:30.372 }, 00:11:30.372 { 00:11:30.372 "dma_device_id": "system", 00:11:30.372 "dma_device_type": 1 00:11:30.372 }, 00:11:30.372 { 00:11:30.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.372 "dma_device_type": 2 00:11:30.372 } 00:11:30.372 ], 00:11:30.372 "driver_specific": { 00:11:30.372 "raid": { 00:11:30.372 "uuid": "5d83a7bd-055b-457a-a9d1-85e4d85026c4", 00:11:30.372 "strip_size_kb": 64, 00:11:30.372 "state": "online", 00:11:30.372 "raid_level": "raid0", 00:11:30.372 "superblock": false, 00:11:30.372 "num_base_bdevs": 4, 00:11:30.372 "num_base_bdevs_discovered": 4, 00:11:30.372 "num_base_bdevs_operational": 4, 00:11:30.372 "base_bdevs_list": [ 00:11:30.372 { 00:11:30.372 "name": "NewBaseBdev", 00:11:30.372 "uuid": "30f17acd-b98a-4c53-b46a-6597367ac463", 00:11:30.372 "is_configured": true, 00:11:30.372 "data_offset": 0, 00:11:30.372 "data_size": 65536 00:11:30.372 }, 00:11:30.372 { 00:11:30.372 "name": "BaseBdev2", 00:11:30.372 "uuid": "8fb97046-5199-4988-a0c7-114b048e269a", 00:11:30.372 "is_configured": true, 00:11:30.372 "data_offset": 0, 00:11:30.372 "data_size": 65536 00:11:30.372 }, 00:11:30.372 { 00:11:30.372 "name": "BaseBdev3", 00:11:30.372 "uuid": "abe7aea3-a4b0-46f2-8d6f-59033f80ce61", 00:11:30.372 "is_configured": true, 00:11:30.372 "data_offset": 0, 00:11:30.372 "data_size": 65536 
00:11:30.372 }, 00:11:30.372 { 00:11:30.372 "name": "BaseBdev4", 00:11:30.372 "uuid": "d12ef8ab-8f56-4d7a-8d4b-daed1ad4df0c", 00:11:30.372 "is_configured": true, 00:11:30.372 "data_offset": 0, 00:11:30.372 "data_size": 65536 00:11:30.372 } 00:11:30.372 ] 00:11:30.372 } 00:11:30.372 } 00:11:30.372 }' 00:11:30.372 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.372 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:30.372 BaseBdev2 00:11:30.372 BaseBdev3 00:11:30.372 BaseBdev4' 00:11:30.372 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.372 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.372 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.372 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:30.372 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.372 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.372 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.630 
11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.630 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.631 [2024-11-15 11:23:13.530286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.631 [2024-11-15 11:23:13.530533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.631 [2024-11-15 11:23:13.530745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.631 [2024-11-15 11:23:13.530851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.631 [2024-11-15 11:23:13.530890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69335 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@952 -- # '[' -z 69335 ']' 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69335 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69335 00:11:30.631 killing process with pid 69335 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69335' 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69335 00:11:30.631 [2024-11-15 11:23:13.568670] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.631 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69335 00:11:31.198 [2024-11-15 11:23:13.909035] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.134 ************************************ 00:11:32.134 END TEST raid_state_function_test 00:11:32.134 ************************************ 00:11:32.134 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:32.134 00:11:32.134 real 0m12.980s 00:11:32.134 user 0m21.473s 00:11:32.134 sys 0m1.846s 00:11:32.134 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:32.134 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.134 11:23:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:11:32.134 11:23:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:32.134 11:23:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:32.134 11:23:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.134 ************************************ 00:11:32.134 START TEST raid_state_function_test_sb 00:11:32.134 ************************************ 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:32.134 
11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:32.134 Process raid pid: 70023 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70023 00:11:32.134 11:23:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70023' 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70023 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70023 ']' 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:32.134 11:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.393 [2024-11-15 11:23:15.160554] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:11:32.393 [2024-11-15 11:23:15.161070] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.651 [2024-11-15 11:23:15.349612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.651 [2024-11-15 11:23:15.478035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.909 [2024-11-15 11:23:15.692718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.909 [2024-11-15 11:23:15.692754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.167 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:33.167 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:33.167 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.167 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.167 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.167 [2024-11-15 11:23:16.113303] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.167 [2024-11-15 11:23:16.113536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.167 [2024-11-15 11:23:16.113681] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.167 [2024-11-15 11:23:16.113744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.167 [2024-11-15 11:23:16.113861] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:33.167 [2024-11-15 11:23:16.113927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.167 [2024-11-15 11:23:16.114089] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.167 [2024-11-15 11:23:16.114155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.425 11:23:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.425 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.425 "name": "Existed_Raid", 00:11:33.425 "uuid": "c94d4b9b-0b2e-40b8-a13c-0346a7feca0b", 00:11:33.425 "strip_size_kb": 64, 00:11:33.425 "state": "configuring", 00:11:33.425 "raid_level": "raid0", 00:11:33.425 "superblock": true, 00:11:33.425 "num_base_bdevs": 4, 00:11:33.425 "num_base_bdevs_discovered": 0, 00:11:33.425 "num_base_bdevs_operational": 4, 00:11:33.425 "base_bdevs_list": [ 00:11:33.425 { 00:11:33.425 "name": "BaseBdev1", 00:11:33.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.425 "is_configured": false, 00:11:33.426 "data_offset": 0, 00:11:33.426 "data_size": 0 00:11:33.426 }, 00:11:33.426 { 00:11:33.426 "name": "BaseBdev2", 00:11:33.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.426 "is_configured": false, 00:11:33.426 "data_offset": 0, 00:11:33.426 "data_size": 0 00:11:33.426 }, 00:11:33.426 { 00:11:33.426 "name": "BaseBdev3", 00:11:33.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.426 "is_configured": false, 00:11:33.426 "data_offset": 0, 00:11:33.426 "data_size": 0 00:11:33.426 }, 00:11:33.426 { 00:11:33.426 "name": "BaseBdev4", 00:11:33.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.426 "is_configured": false, 00:11:33.426 "data_offset": 0, 00:11:33.426 "data_size": 0 00:11:33.426 } 00:11:33.426 ] 00:11:33.426 }' 00:11:33.426 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.426 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 11:23:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.992 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.992 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 [2024-11-15 11:23:16.653397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.992 [2024-11-15 11:23:16.653448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:33.992 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.992 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.992 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.992 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 [2024-11-15 11:23:16.661381] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.993 [2024-11-15 11:23:16.661623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.993 [2024-11-15 11:23:16.661750] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.993 [2024-11-15 11:23:16.661886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.993 [2024-11-15 11:23:16.662000] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.993 [2024-11-15 11:23:16.662200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.993 [2024-11-15 11:23:16.662224] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:11:33.993 [2024-11-15 11:23:16.662249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.993 [2024-11-15 11:23:16.707979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.993 BaseBdev1 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.993 [ 00:11:33.993 { 00:11:33.993 "name": "BaseBdev1", 00:11:33.993 "aliases": [ 00:11:33.993 "5d8b8825-6463-4a2b-9ccc-4d08123a6488" 00:11:33.993 ], 00:11:33.993 "product_name": "Malloc disk", 00:11:33.993 "block_size": 512, 00:11:33.993 "num_blocks": 65536, 00:11:33.993 "uuid": "5d8b8825-6463-4a2b-9ccc-4d08123a6488", 00:11:33.993 "assigned_rate_limits": { 00:11:33.993 "rw_ios_per_sec": 0, 00:11:33.993 "rw_mbytes_per_sec": 0, 00:11:33.993 "r_mbytes_per_sec": 0, 00:11:33.993 "w_mbytes_per_sec": 0 00:11:33.993 }, 00:11:33.993 "claimed": true, 00:11:33.993 "claim_type": "exclusive_write", 00:11:33.993 "zoned": false, 00:11:33.993 "supported_io_types": { 00:11:33.993 "read": true, 00:11:33.993 "write": true, 00:11:33.993 "unmap": true, 00:11:33.993 "flush": true, 00:11:33.993 "reset": true, 00:11:33.993 "nvme_admin": false, 00:11:33.993 "nvme_io": false, 00:11:33.993 "nvme_io_md": false, 00:11:33.993 "write_zeroes": true, 00:11:33.993 "zcopy": true, 00:11:33.993 "get_zone_info": false, 00:11:33.993 "zone_management": false, 00:11:33.993 "zone_append": false, 00:11:33.993 "compare": false, 00:11:33.993 "compare_and_write": false, 00:11:33.993 "abort": true, 00:11:33.993 "seek_hole": false, 00:11:33.993 "seek_data": false, 00:11:33.993 "copy": true, 00:11:33.993 "nvme_iov_md": false 00:11:33.993 }, 00:11:33.993 "memory_domains": [ 00:11:33.993 { 00:11:33.993 "dma_device_id": "system", 00:11:33.993 "dma_device_type": 1 00:11:33.993 }, 00:11:33.993 { 00:11:33.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.993 "dma_device_type": 2 00:11:33.993 } 00:11:33.993 ], 00:11:33.993 "driver_specific": {} 
00:11:33.993 } 00:11:33.993 ] 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.993 "name": "Existed_Raid", 00:11:33.993 "uuid": "8615730e-94c0-43e5-b9be-4218d4870208", 00:11:33.993 "strip_size_kb": 64, 00:11:33.993 "state": "configuring", 00:11:33.993 "raid_level": "raid0", 00:11:33.993 "superblock": true, 00:11:33.993 "num_base_bdevs": 4, 00:11:33.993 "num_base_bdevs_discovered": 1, 00:11:33.993 "num_base_bdevs_operational": 4, 00:11:33.993 "base_bdevs_list": [ 00:11:33.993 { 00:11:33.993 "name": "BaseBdev1", 00:11:33.993 "uuid": "5d8b8825-6463-4a2b-9ccc-4d08123a6488", 00:11:33.993 "is_configured": true, 00:11:33.993 "data_offset": 2048, 00:11:33.993 "data_size": 63488 00:11:33.993 }, 00:11:33.993 { 00:11:33.993 "name": "BaseBdev2", 00:11:33.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.993 "is_configured": false, 00:11:33.993 "data_offset": 0, 00:11:33.993 "data_size": 0 00:11:33.993 }, 00:11:33.993 { 00:11:33.993 "name": "BaseBdev3", 00:11:33.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.993 "is_configured": false, 00:11:33.993 "data_offset": 0, 00:11:33.993 "data_size": 0 00:11:33.993 }, 00:11:33.993 { 00:11:33.993 "name": "BaseBdev4", 00:11:33.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.993 "is_configured": false, 00:11:33.993 "data_offset": 0, 00:11:33.993 "data_size": 0 00:11:33.993 } 00:11:33.993 ] 00:11:33.993 }' 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.993 11:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.561 [2024-11-15 11:23:17.256194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.561 [2024-11-15 11:23:17.256296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.561 [2024-11-15 11:23:17.264312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.561 [2024-11-15 11:23:17.267159] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.561 [2024-11-15 11:23:17.267461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.561 [2024-11-15 11:23:17.267605] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.561 [2024-11-15 11:23:17.267675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.561 [2024-11-15 11:23:17.267801] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:34.561 [2024-11-15 11:23:17.267834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:34.561 11:23:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.561 "name": 
"Existed_Raid", 00:11:34.561 "uuid": "a31d0dac-3971-475d-b3cb-a92ec3307db4", 00:11:34.561 "strip_size_kb": 64, 00:11:34.561 "state": "configuring", 00:11:34.561 "raid_level": "raid0", 00:11:34.561 "superblock": true, 00:11:34.561 "num_base_bdevs": 4, 00:11:34.561 "num_base_bdevs_discovered": 1, 00:11:34.561 "num_base_bdevs_operational": 4, 00:11:34.561 "base_bdevs_list": [ 00:11:34.561 { 00:11:34.561 "name": "BaseBdev1", 00:11:34.561 "uuid": "5d8b8825-6463-4a2b-9ccc-4d08123a6488", 00:11:34.561 "is_configured": true, 00:11:34.561 "data_offset": 2048, 00:11:34.561 "data_size": 63488 00:11:34.561 }, 00:11:34.561 { 00:11:34.561 "name": "BaseBdev2", 00:11:34.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.561 "is_configured": false, 00:11:34.561 "data_offset": 0, 00:11:34.561 "data_size": 0 00:11:34.561 }, 00:11:34.561 { 00:11:34.561 "name": "BaseBdev3", 00:11:34.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.561 "is_configured": false, 00:11:34.561 "data_offset": 0, 00:11:34.561 "data_size": 0 00:11:34.561 }, 00:11:34.561 { 00:11:34.561 "name": "BaseBdev4", 00:11:34.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.561 "is_configured": false, 00:11:34.561 "data_offset": 0, 00:11:34.561 "data_size": 0 00:11:34.561 } 00:11:34.561 ] 00:11:34.561 }' 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.561 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.127 BaseBdev2 00:11:35.127 [2024-11-15 11:23:17.821587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.127 [ 00:11:35.127 { 00:11:35.127 "name": "BaseBdev2", 00:11:35.127 "aliases": [ 00:11:35.127 "047ce586-faf6-41a9-aa82-8b2f9fc79baf" 00:11:35.127 ], 00:11:35.127 "product_name": "Malloc disk", 00:11:35.127 "block_size": 512, 00:11:35.127 "num_blocks": 65536, 00:11:35.127 "uuid": "047ce586-faf6-41a9-aa82-8b2f9fc79baf", 00:11:35.127 "assigned_rate_limits": { 
00:11:35.127 "rw_ios_per_sec": 0, 00:11:35.127 "rw_mbytes_per_sec": 0, 00:11:35.127 "r_mbytes_per_sec": 0, 00:11:35.127 "w_mbytes_per_sec": 0 00:11:35.127 }, 00:11:35.127 "claimed": true, 00:11:35.127 "claim_type": "exclusive_write", 00:11:35.127 "zoned": false, 00:11:35.127 "supported_io_types": { 00:11:35.127 "read": true, 00:11:35.127 "write": true, 00:11:35.127 "unmap": true, 00:11:35.127 "flush": true, 00:11:35.127 "reset": true, 00:11:35.127 "nvme_admin": false, 00:11:35.127 "nvme_io": false, 00:11:35.127 "nvme_io_md": false, 00:11:35.127 "write_zeroes": true, 00:11:35.127 "zcopy": true, 00:11:35.127 "get_zone_info": false, 00:11:35.127 "zone_management": false, 00:11:35.127 "zone_append": false, 00:11:35.127 "compare": false, 00:11:35.127 "compare_and_write": false, 00:11:35.127 "abort": true, 00:11:35.127 "seek_hole": false, 00:11:35.127 "seek_data": false, 00:11:35.127 "copy": true, 00:11:35.127 "nvme_iov_md": false 00:11:35.127 }, 00:11:35.127 "memory_domains": [ 00:11:35.127 { 00:11:35.127 "dma_device_id": "system", 00:11:35.127 "dma_device_type": 1 00:11:35.127 }, 00:11:35.127 { 00:11:35.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.127 "dma_device_type": 2 00:11:35.127 } 00:11:35.127 ], 00:11:35.127 "driver_specific": {} 00:11:35.127 } 00:11:35.127 ] 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.127 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.127 "name": "Existed_Raid", 00:11:35.127 "uuid": "a31d0dac-3971-475d-b3cb-a92ec3307db4", 00:11:35.127 "strip_size_kb": 64, 00:11:35.127 "state": "configuring", 00:11:35.127 "raid_level": "raid0", 00:11:35.127 "superblock": true, 00:11:35.127 "num_base_bdevs": 4, 00:11:35.127 "num_base_bdevs_discovered": 2, 00:11:35.127 "num_base_bdevs_operational": 4, 00:11:35.128 
"base_bdevs_list": [ 00:11:35.128 { 00:11:35.128 "name": "BaseBdev1", 00:11:35.128 "uuid": "5d8b8825-6463-4a2b-9ccc-4d08123a6488", 00:11:35.128 "is_configured": true, 00:11:35.128 "data_offset": 2048, 00:11:35.128 "data_size": 63488 00:11:35.128 }, 00:11:35.128 { 00:11:35.128 "name": "BaseBdev2", 00:11:35.128 "uuid": "047ce586-faf6-41a9-aa82-8b2f9fc79baf", 00:11:35.128 "is_configured": true, 00:11:35.128 "data_offset": 2048, 00:11:35.128 "data_size": 63488 00:11:35.128 }, 00:11:35.128 { 00:11:35.128 "name": "BaseBdev3", 00:11:35.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.128 "is_configured": false, 00:11:35.128 "data_offset": 0, 00:11:35.128 "data_size": 0 00:11:35.128 }, 00:11:35.128 { 00:11:35.128 "name": "BaseBdev4", 00:11:35.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.128 "is_configured": false, 00:11:35.128 "data_offset": 0, 00:11:35.128 "data_size": 0 00:11:35.128 } 00:11:35.128 ] 00:11:35.128 }' 00:11:35.128 11:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.128 11:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.701 [2024-11-15 11:23:18.405911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.701 BaseBdev3 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.701 [ 00:11:35.701 { 00:11:35.701 "name": "BaseBdev3", 00:11:35.701 "aliases": [ 00:11:35.701 "38634648-c105-45b0-8533-1bc404d9301c" 00:11:35.701 ], 00:11:35.701 "product_name": "Malloc disk", 00:11:35.701 "block_size": 512, 00:11:35.701 "num_blocks": 65536, 00:11:35.701 "uuid": "38634648-c105-45b0-8533-1bc404d9301c", 00:11:35.701 "assigned_rate_limits": { 00:11:35.701 "rw_ios_per_sec": 0, 00:11:35.701 "rw_mbytes_per_sec": 0, 00:11:35.701 "r_mbytes_per_sec": 0, 00:11:35.701 "w_mbytes_per_sec": 0 00:11:35.701 }, 00:11:35.701 "claimed": true, 00:11:35.701 "claim_type": "exclusive_write", 00:11:35.701 "zoned": false, 00:11:35.701 "supported_io_types": { 00:11:35.701 "read": true, 00:11:35.701 
"write": true, 00:11:35.701 "unmap": true, 00:11:35.701 "flush": true, 00:11:35.701 "reset": true, 00:11:35.701 "nvme_admin": false, 00:11:35.701 "nvme_io": false, 00:11:35.701 "nvme_io_md": false, 00:11:35.701 "write_zeroes": true, 00:11:35.701 "zcopy": true, 00:11:35.701 "get_zone_info": false, 00:11:35.701 "zone_management": false, 00:11:35.701 "zone_append": false, 00:11:35.701 "compare": false, 00:11:35.701 "compare_and_write": false, 00:11:35.701 "abort": true, 00:11:35.701 "seek_hole": false, 00:11:35.701 "seek_data": false, 00:11:35.701 "copy": true, 00:11:35.701 "nvme_iov_md": false 00:11:35.701 }, 00:11:35.701 "memory_domains": [ 00:11:35.701 { 00:11:35.701 "dma_device_id": "system", 00:11:35.701 "dma_device_type": 1 00:11:35.701 }, 00:11:35.701 { 00:11:35.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.701 "dma_device_type": 2 00:11:35.701 } 00:11:35.701 ], 00:11:35.701 "driver_specific": {} 00:11:35.701 } 00:11:35.701 ] 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.701 "name": "Existed_Raid", 00:11:35.701 "uuid": "a31d0dac-3971-475d-b3cb-a92ec3307db4", 00:11:35.701 "strip_size_kb": 64, 00:11:35.701 "state": "configuring", 00:11:35.701 "raid_level": "raid0", 00:11:35.701 "superblock": true, 00:11:35.701 "num_base_bdevs": 4, 00:11:35.701 "num_base_bdevs_discovered": 3, 00:11:35.701 "num_base_bdevs_operational": 4, 00:11:35.701 "base_bdevs_list": [ 00:11:35.701 { 00:11:35.701 "name": "BaseBdev1", 00:11:35.701 "uuid": "5d8b8825-6463-4a2b-9ccc-4d08123a6488", 00:11:35.701 "is_configured": true, 00:11:35.701 "data_offset": 2048, 00:11:35.701 "data_size": 63488 00:11:35.701 }, 00:11:35.701 { 00:11:35.701 "name": "BaseBdev2", 00:11:35.701 "uuid": 
"047ce586-faf6-41a9-aa82-8b2f9fc79baf", 00:11:35.701 "is_configured": true, 00:11:35.701 "data_offset": 2048, 00:11:35.701 "data_size": 63488 00:11:35.701 }, 00:11:35.701 { 00:11:35.701 "name": "BaseBdev3", 00:11:35.701 "uuid": "38634648-c105-45b0-8533-1bc404d9301c", 00:11:35.701 "is_configured": true, 00:11:35.701 "data_offset": 2048, 00:11:35.701 "data_size": 63488 00:11:35.701 }, 00:11:35.701 { 00:11:35.701 "name": "BaseBdev4", 00:11:35.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.701 "is_configured": false, 00:11:35.701 "data_offset": 0, 00:11:35.701 "data_size": 0 00:11:35.701 } 00:11:35.701 ] 00:11:35.701 }' 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.701 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.312 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:36.312 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.312 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.312 BaseBdev4 00:11:36.312 [2024-11-15 11:23:18.995307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.312 [2024-11-15 11:23:18.995667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:36.312 [2024-11-15 11:23:18.995687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:36.312 [2024-11-15 11:23:18.996021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:36.312 [2024-11-15 11:23:18.996263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:36.312 [2024-11-15 11:23:18.996287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:36.312 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.312 [2024-11-15 11:23:18.996477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.312 11:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:36.312 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:36.312 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:36.312 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:36.312 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:36.313 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:36.313 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:36.313 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.313 11:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.313 [ 00:11:36.313 { 00:11:36.313 "name": "BaseBdev4", 00:11:36.313 "aliases": [ 00:11:36.313 "693031f3-4b7c-4e74-aa51-c9a8fa38d15d" 00:11:36.313 ], 00:11:36.313 "product_name": "Malloc disk", 00:11:36.313 "block_size": 512, 00:11:36.313 
"num_blocks": 65536, 00:11:36.313 "uuid": "693031f3-4b7c-4e74-aa51-c9a8fa38d15d", 00:11:36.313 "assigned_rate_limits": { 00:11:36.313 "rw_ios_per_sec": 0, 00:11:36.313 "rw_mbytes_per_sec": 0, 00:11:36.313 "r_mbytes_per_sec": 0, 00:11:36.313 "w_mbytes_per_sec": 0 00:11:36.313 }, 00:11:36.313 "claimed": true, 00:11:36.313 "claim_type": "exclusive_write", 00:11:36.313 "zoned": false, 00:11:36.313 "supported_io_types": { 00:11:36.313 "read": true, 00:11:36.313 "write": true, 00:11:36.313 "unmap": true, 00:11:36.313 "flush": true, 00:11:36.313 "reset": true, 00:11:36.313 "nvme_admin": false, 00:11:36.313 "nvme_io": false, 00:11:36.313 "nvme_io_md": false, 00:11:36.313 "write_zeroes": true, 00:11:36.313 "zcopy": true, 00:11:36.313 "get_zone_info": false, 00:11:36.313 "zone_management": false, 00:11:36.313 "zone_append": false, 00:11:36.313 "compare": false, 00:11:36.313 "compare_and_write": false, 00:11:36.313 "abort": true, 00:11:36.313 "seek_hole": false, 00:11:36.313 "seek_data": false, 00:11:36.313 "copy": true, 00:11:36.313 "nvme_iov_md": false 00:11:36.313 }, 00:11:36.313 "memory_domains": [ 00:11:36.313 { 00:11:36.313 "dma_device_id": "system", 00:11:36.313 "dma_device_type": 1 00:11:36.313 }, 00:11:36.313 { 00:11:36.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.313 "dma_device_type": 2 00:11:36.313 } 00:11:36.313 ], 00:11:36.313 "driver_specific": {} 00:11:36.313 } 00:11:36.313 ] 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.313 "name": "Existed_Raid", 00:11:36.313 "uuid": "a31d0dac-3971-475d-b3cb-a92ec3307db4", 00:11:36.313 "strip_size_kb": 64, 00:11:36.313 "state": "online", 00:11:36.313 "raid_level": "raid0", 00:11:36.313 "superblock": true, 00:11:36.313 "num_base_bdevs": 4, 
00:11:36.313 "num_base_bdevs_discovered": 4, 00:11:36.313 "num_base_bdevs_operational": 4, 00:11:36.313 "base_bdevs_list": [ 00:11:36.313 { 00:11:36.313 "name": "BaseBdev1", 00:11:36.313 "uuid": "5d8b8825-6463-4a2b-9ccc-4d08123a6488", 00:11:36.313 "is_configured": true, 00:11:36.313 "data_offset": 2048, 00:11:36.313 "data_size": 63488 00:11:36.313 }, 00:11:36.313 { 00:11:36.313 "name": "BaseBdev2", 00:11:36.313 "uuid": "047ce586-faf6-41a9-aa82-8b2f9fc79baf", 00:11:36.313 "is_configured": true, 00:11:36.313 "data_offset": 2048, 00:11:36.313 "data_size": 63488 00:11:36.313 }, 00:11:36.313 { 00:11:36.313 "name": "BaseBdev3", 00:11:36.313 "uuid": "38634648-c105-45b0-8533-1bc404d9301c", 00:11:36.313 "is_configured": true, 00:11:36.313 "data_offset": 2048, 00:11:36.313 "data_size": 63488 00:11:36.313 }, 00:11:36.313 { 00:11:36.313 "name": "BaseBdev4", 00:11:36.313 "uuid": "693031f3-4b7c-4e74-aa51-c9a8fa38d15d", 00:11:36.313 "is_configured": true, 00:11:36.313 "data_offset": 2048, 00:11:36.313 "data_size": 63488 00:11:36.313 } 00:11:36.313 ] 00:11:36.313 }' 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.313 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.880 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.880 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.880 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.880 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.880 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.880 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.880 
11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.880 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.880 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.880 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.880 [2024-11-15 11:23:19.543998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.880 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.880 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.880 "name": "Existed_Raid", 00:11:36.880 "aliases": [ 00:11:36.880 "a31d0dac-3971-475d-b3cb-a92ec3307db4" 00:11:36.880 ], 00:11:36.880 "product_name": "Raid Volume", 00:11:36.880 "block_size": 512, 00:11:36.880 "num_blocks": 253952, 00:11:36.880 "uuid": "a31d0dac-3971-475d-b3cb-a92ec3307db4", 00:11:36.880 "assigned_rate_limits": { 00:11:36.880 "rw_ios_per_sec": 0, 00:11:36.880 "rw_mbytes_per_sec": 0, 00:11:36.880 "r_mbytes_per_sec": 0, 00:11:36.880 "w_mbytes_per_sec": 0 00:11:36.880 }, 00:11:36.880 "claimed": false, 00:11:36.880 "zoned": false, 00:11:36.880 "supported_io_types": { 00:11:36.880 "read": true, 00:11:36.880 "write": true, 00:11:36.880 "unmap": true, 00:11:36.880 "flush": true, 00:11:36.880 "reset": true, 00:11:36.880 "nvme_admin": false, 00:11:36.880 "nvme_io": false, 00:11:36.880 "nvme_io_md": false, 00:11:36.880 "write_zeroes": true, 00:11:36.880 "zcopy": false, 00:11:36.880 "get_zone_info": false, 00:11:36.880 "zone_management": false, 00:11:36.880 "zone_append": false, 00:11:36.880 "compare": false, 00:11:36.880 "compare_and_write": false, 00:11:36.880 "abort": false, 00:11:36.880 "seek_hole": false, 00:11:36.880 "seek_data": false, 00:11:36.880 "copy": false, 00:11:36.880 
"nvme_iov_md": false 00:11:36.880 }, 00:11:36.880 "memory_domains": [ 00:11:36.880 { 00:11:36.880 "dma_device_id": "system", 00:11:36.880 "dma_device_type": 1 00:11:36.880 }, 00:11:36.880 { 00:11:36.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.880 "dma_device_type": 2 00:11:36.880 }, 00:11:36.880 { 00:11:36.880 "dma_device_id": "system", 00:11:36.880 "dma_device_type": 1 00:11:36.880 }, 00:11:36.880 { 00:11:36.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.880 "dma_device_type": 2 00:11:36.880 }, 00:11:36.880 { 00:11:36.880 "dma_device_id": "system", 00:11:36.880 "dma_device_type": 1 00:11:36.880 }, 00:11:36.880 { 00:11:36.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.880 "dma_device_type": 2 00:11:36.880 }, 00:11:36.880 { 00:11:36.880 "dma_device_id": "system", 00:11:36.880 "dma_device_type": 1 00:11:36.880 }, 00:11:36.880 { 00:11:36.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.880 "dma_device_type": 2 00:11:36.880 } 00:11:36.880 ], 00:11:36.880 "driver_specific": { 00:11:36.880 "raid": { 00:11:36.880 "uuid": "a31d0dac-3971-475d-b3cb-a92ec3307db4", 00:11:36.880 "strip_size_kb": 64, 00:11:36.880 "state": "online", 00:11:36.880 "raid_level": "raid0", 00:11:36.880 "superblock": true, 00:11:36.880 "num_base_bdevs": 4, 00:11:36.880 "num_base_bdevs_discovered": 4, 00:11:36.880 "num_base_bdevs_operational": 4, 00:11:36.880 "base_bdevs_list": [ 00:11:36.880 { 00:11:36.880 "name": "BaseBdev1", 00:11:36.880 "uuid": "5d8b8825-6463-4a2b-9ccc-4d08123a6488", 00:11:36.880 "is_configured": true, 00:11:36.880 "data_offset": 2048, 00:11:36.880 "data_size": 63488 00:11:36.880 }, 00:11:36.880 { 00:11:36.880 "name": "BaseBdev2", 00:11:36.880 "uuid": "047ce586-faf6-41a9-aa82-8b2f9fc79baf", 00:11:36.880 "is_configured": true, 00:11:36.880 "data_offset": 2048, 00:11:36.880 "data_size": 63488 00:11:36.880 }, 00:11:36.880 { 00:11:36.880 "name": "BaseBdev3", 00:11:36.880 "uuid": "38634648-c105-45b0-8533-1bc404d9301c", 00:11:36.881 "is_configured": true, 
00:11:36.881 "data_offset": 2048, 00:11:36.881 "data_size": 63488 00:11:36.881 }, 00:11:36.881 { 00:11:36.881 "name": "BaseBdev4", 00:11:36.881 "uuid": "693031f3-4b7c-4e74-aa51-c9a8fa38d15d", 00:11:36.881 "is_configured": true, 00:11:36.881 "data_offset": 2048, 00:11:36.881 "data_size": 63488 00:11:36.881 } 00:11:36.881 ] 00:11:36.881 } 00:11:36.881 } 00:11:36.881 }' 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.881 BaseBdev2 00:11:36.881 BaseBdev3 00:11:36.881 BaseBdev4' 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.881 11:23:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.881 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.139 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.139 [2024-11-15 11:23:19.907773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:37.139 [2024-11-15 11:23:19.907965] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.139 [2024-11-15 11:23:19.908222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.140 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.140 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.140 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.140 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.140 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.140 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:37.140 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.140 "name": "Existed_Raid", 00:11:37.140 "uuid": "a31d0dac-3971-475d-b3cb-a92ec3307db4", 00:11:37.140 "strip_size_kb": 64, 00:11:37.140 "state": "offline", 00:11:37.140 "raid_level": "raid0", 00:11:37.140 "superblock": true, 00:11:37.140 "num_base_bdevs": 4, 00:11:37.140 "num_base_bdevs_discovered": 3, 00:11:37.140 "num_base_bdevs_operational": 3, 00:11:37.140 "base_bdevs_list": [ 00:11:37.140 { 00:11:37.140 "name": null, 00:11:37.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.140 "is_configured": false, 00:11:37.140 "data_offset": 0, 00:11:37.140 "data_size": 63488 00:11:37.140 }, 00:11:37.140 { 00:11:37.140 "name": "BaseBdev2", 00:11:37.140 "uuid": "047ce586-faf6-41a9-aa82-8b2f9fc79baf", 00:11:37.140 "is_configured": true, 00:11:37.140 "data_offset": 2048, 00:11:37.140 "data_size": 63488 00:11:37.140 }, 00:11:37.140 { 00:11:37.140 "name": "BaseBdev3", 00:11:37.140 "uuid": "38634648-c105-45b0-8533-1bc404d9301c", 00:11:37.140 "is_configured": true, 00:11:37.140 "data_offset": 2048, 00:11:37.140 "data_size": 63488 00:11:37.140 }, 00:11:37.140 { 00:11:37.140 "name": "BaseBdev4", 00:11:37.140 "uuid": "693031f3-4b7c-4e74-aa51-c9a8fa38d15d", 00:11:37.140 "is_configured": true, 00:11:37.140 "data_offset": 2048, 00:11:37.140 "data_size": 63488 00:11:37.140 } 00:11:37.140 ] 00:11:37.140 }' 00:11:37.140 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.140 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.707 
11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.707 [2024-11-15 11:23:20.573047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.707 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.966 [2024-11-15 11:23:20.711587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:37.966 11:23:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.966 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.966 [2024-11-15 11:23:20.857682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:37.966 [2024-11-15 11:23:20.857917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.225 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.225 BaseBdev2 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.225 [ 00:11:38.225 { 00:11:38.225 "name": "BaseBdev2", 00:11:38.225 "aliases": [ 00:11:38.225 
"9bd256c8-48ae-4266-a7b2-51794b6dfbc2" 00:11:38.225 ], 00:11:38.225 "product_name": "Malloc disk", 00:11:38.225 "block_size": 512, 00:11:38.225 "num_blocks": 65536, 00:11:38.225 "uuid": "9bd256c8-48ae-4266-a7b2-51794b6dfbc2", 00:11:38.225 "assigned_rate_limits": { 00:11:38.225 "rw_ios_per_sec": 0, 00:11:38.225 "rw_mbytes_per_sec": 0, 00:11:38.225 "r_mbytes_per_sec": 0, 00:11:38.225 "w_mbytes_per_sec": 0 00:11:38.225 }, 00:11:38.225 "claimed": false, 00:11:38.225 "zoned": false, 00:11:38.225 "supported_io_types": { 00:11:38.225 "read": true, 00:11:38.225 "write": true, 00:11:38.225 "unmap": true, 00:11:38.225 "flush": true, 00:11:38.225 "reset": true, 00:11:38.225 "nvme_admin": false, 00:11:38.225 "nvme_io": false, 00:11:38.225 "nvme_io_md": false, 00:11:38.225 "write_zeroes": true, 00:11:38.225 "zcopy": true, 00:11:38.225 "get_zone_info": false, 00:11:38.225 "zone_management": false, 00:11:38.225 "zone_append": false, 00:11:38.225 "compare": false, 00:11:38.225 "compare_and_write": false, 00:11:38.225 "abort": true, 00:11:38.225 "seek_hole": false, 00:11:38.225 "seek_data": false, 00:11:38.225 "copy": true, 00:11:38.225 "nvme_iov_md": false 00:11:38.225 }, 00:11:38.225 "memory_domains": [ 00:11:38.225 { 00:11:38.225 "dma_device_id": "system", 00:11:38.225 "dma_device_type": 1 00:11:38.225 }, 00:11:38.225 { 00:11:38.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.225 "dma_device_type": 2 00:11:38.225 } 00:11:38.225 ], 00:11:38.225 "driver_specific": {} 00:11:38.225 } 00:11:38.225 ] 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.225 11:23:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.225 BaseBdev3 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.225 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.225 [ 00:11:38.225 { 
00:11:38.225 "name": "BaseBdev3", 00:11:38.225 "aliases": [ 00:11:38.225 "3a380aee-dcf5-4a77-8c3f-6ab3910ec052" 00:11:38.225 ], 00:11:38.225 "product_name": "Malloc disk", 00:11:38.225 "block_size": 512, 00:11:38.225 "num_blocks": 65536, 00:11:38.225 "uuid": "3a380aee-dcf5-4a77-8c3f-6ab3910ec052", 00:11:38.225 "assigned_rate_limits": { 00:11:38.225 "rw_ios_per_sec": 0, 00:11:38.225 "rw_mbytes_per_sec": 0, 00:11:38.225 "r_mbytes_per_sec": 0, 00:11:38.225 "w_mbytes_per_sec": 0 00:11:38.225 }, 00:11:38.225 "claimed": false, 00:11:38.225 "zoned": false, 00:11:38.225 "supported_io_types": { 00:11:38.225 "read": true, 00:11:38.225 "write": true, 00:11:38.225 "unmap": true, 00:11:38.225 "flush": true, 00:11:38.225 "reset": true, 00:11:38.225 "nvme_admin": false, 00:11:38.225 "nvme_io": false, 00:11:38.225 "nvme_io_md": false, 00:11:38.225 "write_zeroes": true, 00:11:38.225 "zcopy": true, 00:11:38.225 "get_zone_info": false, 00:11:38.225 "zone_management": false, 00:11:38.225 "zone_append": false, 00:11:38.225 "compare": false, 00:11:38.225 "compare_and_write": false, 00:11:38.225 "abort": true, 00:11:38.225 "seek_hole": false, 00:11:38.225 "seek_data": false, 00:11:38.225 "copy": true, 00:11:38.226 "nvme_iov_md": false 00:11:38.226 }, 00:11:38.226 "memory_domains": [ 00:11:38.226 { 00:11:38.226 "dma_device_id": "system", 00:11:38.226 "dma_device_type": 1 00:11:38.226 }, 00:11:38.226 { 00:11:38.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.226 "dma_device_type": 2 00:11:38.226 } 00:11:38.226 ], 00:11:38.226 "driver_specific": {} 00:11:38.226 } 00:11:38.226 ] 00:11:38.226 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.226 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:38.226 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.226 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:38.226 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:38.226 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.226 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.484 BaseBdev4 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:38.484 [ 00:11:38.484 { 00:11:38.484 "name": "BaseBdev4", 00:11:38.484 "aliases": [ 00:11:38.484 "b75bb888-2204-429d-b77e-c744f00e877e" 00:11:38.484 ], 00:11:38.484 "product_name": "Malloc disk", 00:11:38.484 "block_size": 512, 00:11:38.484 "num_blocks": 65536, 00:11:38.484 "uuid": "b75bb888-2204-429d-b77e-c744f00e877e", 00:11:38.484 "assigned_rate_limits": { 00:11:38.484 "rw_ios_per_sec": 0, 00:11:38.484 "rw_mbytes_per_sec": 0, 00:11:38.484 "r_mbytes_per_sec": 0, 00:11:38.484 "w_mbytes_per_sec": 0 00:11:38.484 }, 00:11:38.484 "claimed": false, 00:11:38.484 "zoned": false, 00:11:38.484 "supported_io_types": { 00:11:38.484 "read": true, 00:11:38.484 "write": true, 00:11:38.484 "unmap": true, 00:11:38.484 "flush": true, 00:11:38.484 "reset": true, 00:11:38.484 "nvme_admin": false, 00:11:38.484 "nvme_io": false, 00:11:38.484 "nvme_io_md": false, 00:11:38.484 "write_zeroes": true, 00:11:38.484 "zcopy": true, 00:11:38.484 "get_zone_info": false, 00:11:38.484 "zone_management": false, 00:11:38.484 "zone_append": false, 00:11:38.484 "compare": false, 00:11:38.484 "compare_and_write": false, 00:11:38.484 "abort": true, 00:11:38.484 "seek_hole": false, 00:11:38.484 "seek_data": false, 00:11:38.484 "copy": true, 00:11:38.484 "nvme_iov_md": false 00:11:38.484 }, 00:11:38.484 "memory_domains": [ 00:11:38.484 { 00:11:38.484 "dma_device_id": "system", 00:11:38.484 "dma_device_type": 1 00:11:38.484 }, 00:11:38.484 { 00:11:38.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.484 "dma_device_type": 2 00:11:38.484 } 00:11:38.484 ], 00:11:38.484 "driver_specific": {} 00:11:38.484 } 00:11:38.484 ] 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.484 11:23:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.484 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.484 [2024-11-15 11:23:21.223308] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.484 [2024-11-15 11:23:21.223365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.484 [2024-11-15 11:23:21.223415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.484 [2024-11-15 11:23:21.226111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.484 [2024-11-15 11:23:21.226204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.485 "name": "Existed_Raid", 00:11:38.485 "uuid": "87020c28-12dc-4850-81bb-0be1959f871f", 00:11:38.485 "strip_size_kb": 64, 00:11:38.485 "state": "configuring", 00:11:38.485 "raid_level": "raid0", 00:11:38.485 "superblock": true, 00:11:38.485 "num_base_bdevs": 4, 00:11:38.485 "num_base_bdevs_discovered": 3, 00:11:38.485 "num_base_bdevs_operational": 4, 00:11:38.485 "base_bdevs_list": [ 00:11:38.485 { 00:11:38.485 "name": "BaseBdev1", 00:11:38.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.485 "is_configured": false, 00:11:38.485 "data_offset": 0, 00:11:38.485 "data_size": 0 00:11:38.485 }, 00:11:38.485 { 00:11:38.485 "name": "BaseBdev2", 00:11:38.485 "uuid": "9bd256c8-48ae-4266-a7b2-51794b6dfbc2", 00:11:38.485 "is_configured": true, 00:11:38.485 "data_offset": 2048, 00:11:38.485 "data_size": 63488 
00:11:38.485 }, 00:11:38.485 { 00:11:38.485 "name": "BaseBdev3", 00:11:38.485 "uuid": "3a380aee-dcf5-4a77-8c3f-6ab3910ec052", 00:11:38.485 "is_configured": true, 00:11:38.485 "data_offset": 2048, 00:11:38.485 "data_size": 63488 00:11:38.485 }, 00:11:38.485 { 00:11:38.485 "name": "BaseBdev4", 00:11:38.485 "uuid": "b75bb888-2204-429d-b77e-c744f00e877e", 00:11:38.485 "is_configured": true, 00:11:38.485 "data_offset": 2048, 00:11:38.485 "data_size": 63488 00:11:38.485 } 00:11:38.485 ] 00:11:38.485 }' 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.485 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.051 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.052 [2024-11-15 11:23:21.755494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.052 "name": "Existed_Raid", 00:11:39.052 "uuid": "87020c28-12dc-4850-81bb-0be1959f871f", 00:11:39.052 "strip_size_kb": 64, 00:11:39.052 "state": "configuring", 00:11:39.052 "raid_level": "raid0", 00:11:39.052 "superblock": true, 00:11:39.052 "num_base_bdevs": 4, 00:11:39.052 "num_base_bdevs_discovered": 2, 00:11:39.052 "num_base_bdevs_operational": 4, 00:11:39.052 "base_bdevs_list": [ 00:11:39.052 { 00:11:39.052 "name": "BaseBdev1", 00:11:39.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.052 "is_configured": false, 00:11:39.052 "data_offset": 0, 00:11:39.052 "data_size": 0 00:11:39.052 }, 00:11:39.052 { 00:11:39.052 "name": null, 00:11:39.052 "uuid": "9bd256c8-48ae-4266-a7b2-51794b6dfbc2", 00:11:39.052 "is_configured": false, 00:11:39.052 "data_offset": 0, 00:11:39.052 "data_size": 63488 
00:11:39.052 }, 00:11:39.052 { 00:11:39.052 "name": "BaseBdev3", 00:11:39.052 "uuid": "3a380aee-dcf5-4a77-8c3f-6ab3910ec052", 00:11:39.052 "is_configured": true, 00:11:39.052 "data_offset": 2048, 00:11:39.052 "data_size": 63488 00:11:39.052 }, 00:11:39.052 { 00:11:39.052 "name": "BaseBdev4", 00:11:39.052 "uuid": "b75bb888-2204-429d-b77e-c744f00e877e", 00:11:39.052 "is_configured": true, 00:11:39.052 "data_offset": 2048, 00:11:39.052 "data_size": 63488 00:11:39.052 } 00:11:39.052 ] 00:11:39.052 }' 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.052 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.619 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.619 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:39.619 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.619 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.619 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.619 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:39.619 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:39.619 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.619 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.619 [2024-11-15 11:23:22.373488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.619 BaseBdev1 00:11:39.619 11:23:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.620 [ 00:11:39.620 { 00:11:39.620 "name": "BaseBdev1", 00:11:39.620 "aliases": [ 00:11:39.620 "2475d5d1-b458-4b85-8c64-34bc14b35624" 00:11:39.620 ], 00:11:39.620 "product_name": "Malloc disk", 00:11:39.620 "block_size": 512, 00:11:39.620 "num_blocks": 65536, 00:11:39.620 "uuid": "2475d5d1-b458-4b85-8c64-34bc14b35624", 00:11:39.620 "assigned_rate_limits": { 00:11:39.620 "rw_ios_per_sec": 0, 00:11:39.620 "rw_mbytes_per_sec": 0, 
00:11:39.620 "r_mbytes_per_sec": 0, 00:11:39.620 "w_mbytes_per_sec": 0 00:11:39.620 }, 00:11:39.620 "claimed": true, 00:11:39.620 "claim_type": "exclusive_write", 00:11:39.620 "zoned": false, 00:11:39.620 "supported_io_types": { 00:11:39.620 "read": true, 00:11:39.620 "write": true, 00:11:39.620 "unmap": true, 00:11:39.620 "flush": true, 00:11:39.620 "reset": true, 00:11:39.620 "nvme_admin": false, 00:11:39.620 "nvme_io": false, 00:11:39.620 "nvme_io_md": false, 00:11:39.620 "write_zeroes": true, 00:11:39.620 "zcopy": true, 00:11:39.620 "get_zone_info": false, 00:11:39.620 "zone_management": false, 00:11:39.620 "zone_append": false, 00:11:39.620 "compare": false, 00:11:39.620 "compare_and_write": false, 00:11:39.620 "abort": true, 00:11:39.620 "seek_hole": false, 00:11:39.620 "seek_data": false, 00:11:39.620 "copy": true, 00:11:39.620 "nvme_iov_md": false 00:11:39.620 }, 00:11:39.620 "memory_domains": [ 00:11:39.620 { 00:11:39.620 "dma_device_id": "system", 00:11:39.620 "dma_device_type": 1 00:11:39.620 }, 00:11:39.620 { 00:11:39.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.620 "dma_device_type": 2 00:11:39.620 } 00:11:39.620 ], 00:11:39.620 "driver_specific": {} 00:11:39.620 } 00:11:39.620 ] 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.620 11:23:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.620 "name": "Existed_Raid", 00:11:39.620 "uuid": "87020c28-12dc-4850-81bb-0be1959f871f", 00:11:39.620 "strip_size_kb": 64, 00:11:39.620 "state": "configuring", 00:11:39.620 "raid_level": "raid0", 00:11:39.620 "superblock": true, 00:11:39.620 "num_base_bdevs": 4, 00:11:39.620 "num_base_bdevs_discovered": 3, 00:11:39.620 "num_base_bdevs_operational": 4, 00:11:39.620 "base_bdevs_list": [ 00:11:39.620 { 00:11:39.620 "name": "BaseBdev1", 00:11:39.620 "uuid": "2475d5d1-b458-4b85-8c64-34bc14b35624", 00:11:39.620 "is_configured": true, 00:11:39.620 "data_offset": 2048, 00:11:39.620 "data_size": 63488 00:11:39.620 }, 00:11:39.620 { 
00:11:39.620 "name": null, 00:11:39.620 "uuid": "9bd256c8-48ae-4266-a7b2-51794b6dfbc2", 00:11:39.620 "is_configured": false, 00:11:39.620 "data_offset": 0, 00:11:39.620 "data_size": 63488 00:11:39.620 }, 00:11:39.620 { 00:11:39.620 "name": "BaseBdev3", 00:11:39.620 "uuid": "3a380aee-dcf5-4a77-8c3f-6ab3910ec052", 00:11:39.620 "is_configured": true, 00:11:39.620 "data_offset": 2048, 00:11:39.620 "data_size": 63488 00:11:39.620 }, 00:11:39.620 { 00:11:39.620 "name": "BaseBdev4", 00:11:39.620 "uuid": "b75bb888-2204-429d-b77e-c744f00e877e", 00:11:39.620 "is_configured": true, 00:11:39.620 "data_offset": 2048, 00:11:39.620 "data_size": 63488 00:11:39.620 } 00:11:39.620 ] 00:11:39.620 }' 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.620 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.187 [2024-11-15 11:23:22.993812] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.187 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.187 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.187 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.187 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.187 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.187 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.187 11:23:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.187 "name": "Existed_Raid", 00:11:40.187 "uuid": "87020c28-12dc-4850-81bb-0be1959f871f", 00:11:40.187 "strip_size_kb": 64, 00:11:40.187 "state": "configuring", 00:11:40.187 "raid_level": "raid0", 00:11:40.187 "superblock": true, 00:11:40.187 "num_base_bdevs": 4, 00:11:40.187 "num_base_bdevs_discovered": 2, 00:11:40.187 "num_base_bdevs_operational": 4, 00:11:40.187 "base_bdevs_list": [ 00:11:40.187 { 00:11:40.187 "name": "BaseBdev1", 00:11:40.187 "uuid": "2475d5d1-b458-4b85-8c64-34bc14b35624", 00:11:40.187 "is_configured": true, 00:11:40.187 "data_offset": 2048, 00:11:40.187 "data_size": 63488 00:11:40.187 }, 00:11:40.187 { 00:11:40.187 "name": null, 00:11:40.187 "uuid": "9bd256c8-48ae-4266-a7b2-51794b6dfbc2", 00:11:40.187 "is_configured": false, 00:11:40.187 "data_offset": 0, 00:11:40.187 "data_size": 63488 00:11:40.187 }, 00:11:40.187 { 00:11:40.187 "name": null, 00:11:40.187 "uuid": "3a380aee-dcf5-4a77-8c3f-6ab3910ec052", 00:11:40.187 "is_configured": false, 00:11:40.187 "data_offset": 0, 00:11:40.187 "data_size": 63488 00:11:40.187 }, 00:11:40.187 { 00:11:40.187 "name": "BaseBdev4", 00:11:40.187 "uuid": "b75bb888-2204-429d-b77e-c744f00e877e", 00:11:40.187 "is_configured": true, 00:11:40.187 "data_offset": 2048, 00:11:40.187 "data_size": 63488 00:11:40.187 } 00:11:40.187 ] 00:11:40.187 }' 00:11:40.187 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.187 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.753 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.753 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.753 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.753 11:23:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.753 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.753 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:40.753 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:40.753 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.754 [2024-11-15 11:23:23.569967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.754 "name": "Existed_Raid", 00:11:40.754 "uuid": "87020c28-12dc-4850-81bb-0be1959f871f", 00:11:40.754 "strip_size_kb": 64, 00:11:40.754 "state": "configuring", 00:11:40.754 "raid_level": "raid0", 00:11:40.754 "superblock": true, 00:11:40.754 "num_base_bdevs": 4, 00:11:40.754 "num_base_bdevs_discovered": 3, 00:11:40.754 "num_base_bdevs_operational": 4, 00:11:40.754 "base_bdevs_list": [ 00:11:40.754 { 00:11:40.754 "name": "BaseBdev1", 00:11:40.754 "uuid": "2475d5d1-b458-4b85-8c64-34bc14b35624", 00:11:40.754 "is_configured": true, 00:11:40.754 "data_offset": 2048, 00:11:40.754 "data_size": 63488 00:11:40.754 }, 00:11:40.754 { 00:11:40.754 "name": null, 00:11:40.754 "uuid": "9bd256c8-48ae-4266-a7b2-51794b6dfbc2", 00:11:40.754 "is_configured": false, 00:11:40.754 "data_offset": 0, 00:11:40.754 "data_size": 63488 00:11:40.754 }, 00:11:40.754 { 00:11:40.754 "name": "BaseBdev3", 00:11:40.754 "uuid": "3a380aee-dcf5-4a77-8c3f-6ab3910ec052", 00:11:40.754 "is_configured": true, 00:11:40.754 "data_offset": 2048, 00:11:40.754 "data_size": 63488 00:11:40.754 }, 00:11:40.754 { 00:11:40.754 "name": "BaseBdev4", 00:11:40.754 "uuid": 
"b75bb888-2204-429d-b77e-c744f00e877e", 00:11:40.754 "is_configured": true, 00:11:40.754 "data_offset": 2048, 00:11:40.754 "data_size": 63488 00:11:40.754 } 00:11:40.754 ] 00:11:40.754 }' 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.754 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.321 [2024-11-15 11:23:24.130206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.321 "name": "Existed_Raid", 00:11:41.321 "uuid": "87020c28-12dc-4850-81bb-0be1959f871f", 00:11:41.321 "strip_size_kb": 64, 00:11:41.321 "state": "configuring", 00:11:41.321 "raid_level": "raid0", 00:11:41.321 "superblock": true, 00:11:41.321 "num_base_bdevs": 4, 00:11:41.321 "num_base_bdevs_discovered": 2, 00:11:41.321 "num_base_bdevs_operational": 4, 00:11:41.321 "base_bdevs_list": [ 00:11:41.321 { 00:11:41.321 "name": null, 00:11:41.321 
"uuid": "2475d5d1-b458-4b85-8c64-34bc14b35624", 00:11:41.321 "is_configured": false, 00:11:41.321 "data_offset": 0, 00:11:41.321 "data_size": 63488 00:11:41.321 }, 00:11:41.321 { 00:11:41.321 "name": null, 00:11:41.321 "uuid": "9bd256c8-48ae-4266-a7b2-51794b6dfbc2", 00:11:41.321 "is_configured": false, 00:11:41.321 "data_offset": 0, 00:11:41.321 "data_size": 63488 00:11:41.321 }, 00:11:41.321 { 00:11:41.321 "name": "BaseBdev3", 00:11:41.321 "uuid": "3a380aee-dcf5-4a77-8c3f-6ab3910ec052", 00:11:41.321 "is_configured": true, 00:11:41.321 "data_offset": 2048, 00:11:41.321 "data_size": 63488 00:11:41.321 }, 00:11:41.321 { 00:11:41.321 "name": "BaseBdev4", 00:11:41.321 "uuid": "b75bb888-2204-429d-b77e-c744f00e877e", 00:11:41.321 "is_configured": true, 00:11:41.321 "data_offset": 2048, 00:11:41.321 "data_size": 63488 00:11:41.321 } 00:11:41.321 ] 00:11:41.321 }' 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.321 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.889 [2024-11-15 11:23:24.780307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.889 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.147 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.148 "name": "Existed_Raid", 00:11:42.148 "uuid": "87020c28-12dc-4850-81bb-0be1959f871f", 00:11:42.148 "strip_size_kb": 64, 00:11:42.148 "state": "configuring", 00:11:42.148 "raid_level": "raid0", 00:11:42.148 "superblock": true, 00:11:42.148 "num_base_bdevs": 4, 00:11:42.148 "num_base_bdevs_discovered": 3, 00:11:42.148 "num_base_bdevs_operational": 4, 00:11:42.148 "base_bdevs_list": [ 00:11:42.148 { 00:11:42.148 "name": null, 00:11:42.148 "uuid": "2475d5d1-b458-4b85-8c64-34bc14b35624", 00:11:42.148 "is_configured": false, 00:11:42.148 "data_offset": 0, 00:11:42.148 "data_size": 63488 00:11:42.148 }, 00:11:42.148 { 00:11:42.148 "name": "BaseBdev2", 00:11:42.148 "uuid": "9bd256c8-48ae-4266-a7b2-51794b6dfbc2", 00:11:42.148 "is_configured": true, 00:11:42.148 "data_offset": 2048, 00:11:42.148 "data_size": 63488 00:11:42.148 }, 00:11:42.148 { 00:11:42.148 "name": "BaseBdev3", 00:11:42.148 "uuid": "3a380aee-dcf5-4a77-8c3f-6ab3910ec052", 00:11:42.148 "is_configured": true, 00:11:42.148 "data_offset": 2048, 00:11:42.148 "data_size": 63488 00:11:42.148 }, 00:11:42.148 { 00:11:42.148 "name": "BaseBdev4", 00:11:42.148 "uuid": "b75bb888-2204-429d-b77e-c744f00e877e", 00:11:42.148 "is_configured": true, 00:11:42.148 "data_offset": 2048, 00:11:42.148 "data_size": 63488 00:11:42.148 } 00:11:42.148 ] 00:11:42.148 }' 00:11:42.148 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.148 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.406 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:42.406 11:23:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.406 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.406 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.406 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.406 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:42.406 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.406 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:42.406 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.406 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2475d5d1-b458-4b85-8c64-34bc14b35624 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.666 [2024-11-15 11:23:25.439720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:42.666 NewBaseBdev 00:11:42.666 [2024-11-15 11:23:25.440251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:42.666 [2024-11-15 11:23:25.440276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:42.666 [2024-11-15 11:23:25.440650] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:42.666 [2024-11-15 11:23:25.440827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:42.666 [2024-11-15 11:23:25.440848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:42.666 [2024-11-15 11:23:25.441027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.666 11:23:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.666 [ 00:11:42.666 { 00:11:42.666 "name": "NewBaseBdev", 00:11:42.666 "aliases": [ 00:11:42.666 "2475d5d1-b458-4b85-8c64-34bc14b35624" 00:11:42.666 ], 00:11:42.666 "product_name": "Malloc disk", 00:11:42.666 "block_size": 512, 00:11:42.666 "num_blocks": 65536, 00:11:42.666 "uuid": "2475d5d1-b458-4b85-8c64-34bc14b35624", 00:11:42.666 "assigned_rate_limits": { 00:11:42.666 "rw_ios_per_sec": 0, 00:11:42.666 "rw_mbytes_per_sec": 0, 00:11:42.666 "r_mbytes_per_sec": 0, 00:11:42.666 "w_mbytes_per_sec": 0 00:11:42.666 }, 00:11:42.666 "claimed": true, 00:11:42.666 "claim_type": "exclusive_write", 00:11:42.666 "zoned": false, 00:11:42.666 "supported_io_types": { 00:11:42.666 "read": true, 00:11:42.666 "write": true, 00:11:42.666 "unmap": true, 00:11:42.666 "flush": true, 00:11:42.666 "reset": true, 00:11:42.666 "nvme_admin": false, 00:11:42.666 "nvme_io": false, 00:11:42.666 "nvme_io_md": false, 00:11:42.666 "write_zeroes": true, 00:11:42.666 "zcopy": true, 00:11:42.666 "get_zone_info": false, 00:11:42.666 "zone_management": false, 00:11:42.666 "zone_append": false, 00:11:42.666 "compare": false, 00:11:42.666 "compare_and_write": false, 00:11:42.666 "abort": true, 00:11:42.666 "seek_hole": false, 00:11:42.666 "seek_data": false, 00:11:42.666 "copy": true, 00:11:42.666 "nvme_iov_md": false 00:11:42.666 }, 00:11:42.666 "memory_domains": [ 00:11:42.666 { 00:11:42.666 "dma_device_id": "system", 00:11:42.666 "dma_device_type": 1 00:11:42.666 }, 00:11:42.666 { 00:11:42.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.666 "dma_device_type": 2 00:11:42.666 } 00:11:42.666 ], 00:11:42.666 "driver_specific": {} 00:11:42.666 } 00:11:42.666 ] 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:42.666 11:23:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.666 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.666 "name": "Existed_Raid", 00:11:42.666 "uuid": "87020c28-12dc-4850-81bb-0be1959f871f", 00:11:42.666 "strip_size_kb": 64, 00:11:42.666 
"state": "online", 00:11:42.666 "raid_level": "raid0", 00:11:42.666 "superblock": true, 00:11:42.666 "num_base_bdevs": 4, 00:11:42.666 "num_base_bdevs_discovered": 4, 00:11:42.666 "num_base_bdevs_operational": 4, 00:11:42.666 "base_bdevs_list": [ 00:11:42.666 { 00:11:42.666 "name": "NewBaseBdev", 00:11:42.666 "uuid": "2475d5d1-b458-4b85-8c64-34bc14b35624", 00:11:42.666 "is_configured": true, 00:11:42.666 "data_offset": 2048, 00:11:42.666 "data_size": 63488 00:11:42.666 }, 00:11:42.666 { 00:11:42.666 "name": "BaseBdev2", 00:11:42.666 "uuid": "9bd256c8-48ae-4266-a7b2-51794b6dfbc2", 00:11:42.666 "is_configured": true, 00:11:42.666 "data_offset": 2048, 00:11:42.666 "data_size": 63488 00:11:42.666 }, 00:11:42.666 { 00:11:42.666 "name": "BaseBdev3", 00:11:42.666 "uuid": "3a380aee-dcf5-4a77-8c3f-6ab3910ec052", 00:11:42.666 "is_configured": true, 00:11:42.666 "data_offset": 2048, 00:11:42.666 "data_size": 63488 00:11:42.666 }, 00:11:42.666 { 00:11:42.666 "name": "BaseBdev4", 00:11:42.666 "uuid": "b75bb888-2204-429d-b77e-c744f00e877e", 00:11:42.666 "is_configured": true, 00:11:42.666 "data_offset": 2048, 00:11:42.666 "data_size": 63488 00:11:42.666 } 00:11:42.666 ] 00:11:42.667 }' 00:11:42.667 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.667 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.234 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:43.234 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:43.234 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:43.234 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:43.234 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:43.234 
11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:43.234 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:43.234 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.234 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.234 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:43.234 [2024-11-15 11:23:25.996449] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.234 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.234 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:43.234 "name": "Existed_Raid", 00:11:43.234 "aliases": [ 00:11:43.234 "87020c28-12dc-4850-81bb-0be1959f871f" 00:11:43.234 ], 00:11:43.234 "product_name": "Raid Volume", 00:11:43.234 "block_size": 512, 00:11:43.234 "num_blocks": 253952, 00:11:43.234 "uuid": "87020c28-12dc-4850-81bb-0be1959f871f", 00:11:43.234 "assigned_rate_limits": { 00:11:43.234 "rw_ios_per_sec": 0, 00:11:43.234 "rw_mbytes_per_sec": 0, 00:11:43.234 "r_mbytes_per_sec": 0, 00:11:43.234 "w_mbytes_per_sec": 0 00:11:43.234 }, 00:11:43.234 "claimed": false, 00:11:43.234 "zoned": false, 00:11:43.234 "supported_io_types": { 00:11:43.234 "read": true, 00:11:43.234 "write": true, 00:11:43.234 "unmap": true, 00:11:43.234 "flush": true, 00:11:43.234 "reset": true, 00:11:43.235 "nvme_admin": false, 00:11:43.235 "nvme_io": false, 00:11:43.235 "nvme_io_md": false, 00:11:43.235 "write_zeroes": true, 00:11:43.235 "zcopy": false, 00:11:43.235 "get_zone_info": false, 00:11:43.235 "zone_management": false, 00:11:43.235 "zone_append": false, 00:11:43.235 "compare": false, 00:11:43.235 "compare_and_write": false, 00:11:43.235 "abort": 
false, 00:11:43.235 "seek_hole": false, 00:11:43.235 "seek_data": false, 00:11:43.235 "copy": false, 00:11:43.235 "nvme_iov_md": false 00:11:43.235 }, 00:11:43.235 "memory_domains": [ 00:11:43.235 { 00:11:43.235 "dma_device_id": "system", 00:11:43.235 "dma_device_type": 1 00:11:43.235 }, 00:11:43.235 { 00:11:43.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.235 "dma_device_type": 2 00:11:43.235 }, 00:11:43.235 { 00:11:43.235 "dma_device_id": "system", 00:11:43.235 "dma_device_type": 1 00:11:43.235 }, 00:11:43.235 { 00:11:43.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.235 "dma_device_type": 2 00:11:43.235 }, 00:11:43.235 { 00:11:43.235 "dma_device_id": "system", 00:11:43.235 "dma_device_type": 1 00:11:43.235 }, 00:11:43.235 { 00:11:43.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.235 "dma_device_type": 2 00:11:43.235 }, 00:11:43.235 { 00:11:43.235 "dma_device_id": "system", 00:11:43.235 "dma_device_type": 1 00:11:43.235 }, 00:11:43.235 { 00:11:43.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.235 "dma_device_type": 2 00:11:43.235 } 00:11:43.235 ], 00:11:43.235 "driver_specific": { 00:11:43.235 "raid": { 00:11:43.235 "uuid": "87020c28-12dc-4850-81bb-0be1959f871f", 00:11:43.235 "strip_size_kb": 64, 00:11:43.235 "state": "online", 00:11:43.235 "raid_level": "raid0", 00:11:43.235 "superblock": true, 00:11:43.235 "num_base_bdevs": 4, 00:11:43.235 "num_base_bdevs_discovered": 4, 00:11:43.235 "num_base_bdevs_operational": 4, 00:11:43.235 "base_bdevs_list": [ 00:11:43.235 { 00:11:43.235 "name": "NewBaseBdev", 00:11:43.235 "uuid": "2475d5d1-b458-4b85-8c64-34bc14b35624", 00:11:43.235 "is_configured": true, 00:11:43.235 "data_offset": 2048, 00:11:43.235 "data_size": 63488 00:11:43.235 }, 00:11:43.235 { 00:11:43.235 "name": "BaseBdev2", 00:11:43.235 "uuid": "9bd256c8-48ae-4266-a7b2-51794b6dfbc2", 00:11:43.235 "is_configured": true, 00:11:43.235 "data_offset": 2048, 00:11:43.235 "data_size": 63488 00:11:43.235 }, 00:11:43.235 { 00:11:43.235 
"name": "BaseBdev3", 00:11:43.235 "uuid": "3a380aee-dcf5-4a77-8c3f-6ab3910ec052", 00:11:43.235 "is_configured": true, 00:11:43.235 "data_offset": 2048, 00:11:43.235 "data_size": 63488 00:11:43.235 }, 00:11:43.235 { 00:11:43.235 "name": "BaseBdev4", 00:11:43.235 "uuid": "b75bb888-2204-429d-b77e-c744f00e877e", 00:11:43.235 "is_configured": true, 00:11:43.235 "data_offset": 2048, 00:11:43.235 "data_size": 63488 00:11:43.235 } 00:11:43.235 ] 00:11:43.235 } 00:11:43.235 } 00:11:43.235 }' 00:11:43.235 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.235 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:43.235 BaseBdev2 00:11:43.235 BaseBdev3 00:11:43.235 BaseBdev4' 00:11:43.235 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.235 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.235 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.235 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:43.235 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.235 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.235 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.494 11:23:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.494 [2024-11-15 11:23:26.387991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:43.494 [2024-11-15 11:23:26.388217] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.494 [2024-11-15 11:23:26.388451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.494 [2024-11-15 11:23:26.388720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.494 [2024-11-15 11:23:26.388749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70023 00:11:43.494 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70023 ']' 00:11:43.495 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70023 00:11:43.495 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:43.495 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:43.495 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70023 00:11:43.495 killing process with pid 70023 00:11:43.495 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:43.495 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:43.495 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70023' 00:11:43.495 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70023 00:11:43.495 [2024-11-15 11:23:26.427651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.495 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70023 00:11:44.061 [2024-11-15 11:23:26.776038] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.027 11:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:45.027 00:11:45.027 real 0m12.841s 00:11:45.027 user 0m21.237s 00:11:45.027 sys 0m1.811s 00:11:45.027 11:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:45.027 
************************************ 00:11:45.027 END TEST raid_state_function_test_sb 00:11:45.027 ************************************ 00:11:45.027 11:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.027 11:23:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:45.027 11:23:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:45.027 11:23:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:45.027 11:23:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.027 ************************************ 00:11:45.027 START TEST raid_superblock_test 00:11:45.027 ************************************ 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70699 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70699 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70699 ']' 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:45.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:45.027 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.286 [2024-11-15 11:23:28.053405] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:11:45.286 [2024-11-15 11:23:28.053799] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70699 ] 00:11:45.286 [2024-11-15 11:23:28.233442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.545 [2024-11-15 11:23:28.379338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.803 [2024-11-15 11:23:28.596677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.803 [2024-11-15 11:23:28.596754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.063 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:46.063 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:46.063 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:46.063 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.063 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:46.063 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:46.063 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:46.063 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:46.063 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:46.063 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:46.063 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:46.063 
11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.063 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.323 malloc1 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.323 [2024-11-15 11:23:29.040234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:46.323 [2024-11-15 11:23:29.040526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.323 [2024-11-15 11:23:29.040730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:46.323 [2024-11-15 11:23:29.040915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.323 [2024-11-15 11:23:29.044298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.323 [2024-11-15 11:23:29.044510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:46.323 pt1 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.323 malloc2 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.323 [2024-11-15 11:23:29.097446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.323 [2024-11-15 11:23:29.097688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.323 [2024-11-15 11:23:29.097737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:46.323 [2024-11-15 11:23:29.097753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.323 [2024-11-15 11:23:29.100779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.323 [2024-11-15 11:23:29.100820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.323 
pt2 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:46.323 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.324 malloc3 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.324 [2024-11-15 11:23:29.163969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:46.324 [2024-11-15 11:23:29.164227] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.324 [2024-11-15 11:23:29.164313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:46.324 [2024-11-15 11:23:29.164515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.324 [2024-11-15 11:23:29.167562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.324 [2024-11-15 11:23:29.167773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:46.324 pt3 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.324 malloc4 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.324 [2024-11-15 11:23:29.216020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:46.324 [2024-11-15 11:23:29.216124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.324 [2024-11-15 11:23:29.216157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:46.324 [2024-11-15 11:23:29.216171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.324 [2024-11-15 11:23:29.219120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.324 [2024-11-15 11:23:29.219164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:46.324 pt4 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.324 [2024-11-15 11:23:29.224085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:46.324 [2024-11-15 
11:23:29.227114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.324 [2024-11-15 11:23:29.227449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:46.324 [2024-11-15 11:23:29.227677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:46.324 [2024-11-15 11:23:29.228066] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:46.324 [2024-11-15 11:23:29.228105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:46.324 [2024-11-15 11:23:29.228518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:46.324 [2024-11-15 11:23:29.228764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:46.324 [2024-11-15 11:23:29.228785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:46.324 [2024-11-15 11:23:29.229013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.324 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.583 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.583 "name": "raid_bdev1", 00:11:46.583 "uuid": "a7844c7c-2985-4b17-96a6-d281762536ad", 00:11:46.583 "strip_size_kb": 64, 00:11:46.583 "state": "online", 00:11:46.583 "raid_level": "raid0", 00:11:46.583 "superblock": true, 00:11:46.583 "num_base_bdevs": 4, 00:11:46.583 "num_base_bdevs_discovered": 4, 00:11:46.583 "num_base_bdevs_operational": 4, 00:11:46.583 "base_bdevs_list": [ 00:11:46.583 { 00:11:46.583 "name": "pt1", 00:11:46.583 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.583 "is_configured": true, 00:11:46.583 "data_offset": 2048, 00:11:46.583 "data_size": 63488 00:11:46.583 }, 00:11:46.583 { 00:11:46.583 "name": "pt2", 00:11:46.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.583 "is_configured": true, 00:11:46.583 "data_offset": 2048, 00:11:46.583 "data_size": 63488 00:11:46.583 }, 00:11:46.583 { 00:11:46.584 "name": "pt3", 00:11:46.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.584 "is_configured": true, 00:11:46.584 "data_offset": 2048, 00:11:46.584 
"data_size": 63488 00:11:46.584 }, 00:11:46.584 { 00:11:46.584 "name": "pt4", 00:11:46.584 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.584 "is_configured": true, 00:11:46.584 "data_offset": 2048, 00:11:46.584 "data_size": 63488 00:11:46.584 } 00:11:46.584 ] 00:11:46.584 }' 00:11:46.584 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.584 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:46.843 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:46.843 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:46.843 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:46.843 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:46.843 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:46.843 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.843 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:46.843 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.843 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 [2024-11-15 11:23:29.712826] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.843 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.843 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:46.843 "name": "raid_bdev1", 00:11:46.843 "aliases": [ 00:11:46.843 "a7844c7c-2985-4b17-96a6-d281762536ad" 
00:11:46.843 ], 00:11:46.843 "product_name": "Raid Volume", 00:11:46.843 "block_size": 512, 00:11:46.843 "num_blocks": 253952, 00:11:46.843 "uuid": "a7844c7c-2985-4b17-96a6-d281762536ad", 00:11:46.843 "assigned_rate_limits": { 00:11:46.843 "rw_ios_per_sec": 0, 00:11:46.843 "rw_mbytes_per_sec": 0, 00:11:46.843 "r_mbytes_per_sec": 0, 00:11:46.843 "w_mbytes_per_sec": 0 00:11:46.843 }, 00:11:46.843 "claimed": false, 00:11:46.843 "zoned": false, 00:11:46.843 "supported_io_types": { 00:11:46.843 "read": true, 00:11:46.843 "write": true, 00:11:46.843 "unmap": true, 00:11:46.843 "flush": true, 00:11:46.843 "reset": true, 00:11:46.843 "nvme_admin": false, 00:11:46.843 "nvme_io": false, 00:11:46.843 "nvme_io_md": false, 00:11:46.843 "write_zeroes": true, 00:11:46.843 "zcopy": false, 00:11:46.843 "get_zone_info": false, 00:11:46.843 "zone_management": false, 00:11:46.843 "zone_append": false, 00:11:46.843 "compare": false, 00:11:46.843 "compare_and_write": false, 00:11:46.843 "abort": false, 00:11:46.843 "seek_hole": false, 00:11:46.843 "seek_data": false, 00:11:46.843 "copy": false, 00:11:46.843 "nvme_iov_md": false 00:11:46.843 }, 00:11:46.843 "memory_domains": [ 00:11:46.843 { 00:11:46.843 "dma_device_id": "system", 00:11:46.843 "dma_device_type": 1 00:11:46.843 }, 00:11:46.843 { 00:11:46.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.843 "dma_device_type": 2 00:11:46.843 }, 00:11:46.843 { 00:11:46.843 "dma_device_id": "system", 00:11:46.843 "dma_device_type": 1 00:11:46.843 }, 00:11:46.843 { 00:11:46.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.843 "dma_device_type": 2 00:11:46.843 }, 00:11:46.843 { 00:11:46.843 "dma_device_id": "system", 00:11:46.843 "dma_device_type": 1 00:11:46.843 }, 00:11:46.843 { 00:11:46.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.843 "dma_device_type": 2 00:11:46.843 }, 00:11:46.843 { 00:11:46.843 "dma_device_id": "system", 00:11:46.843 "dma_device_type": 1 00:11:46.843 }, 00:11:46.843 { 00:11:46.843 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:46.843 "dma_device_type": 2 00:11:46.843 } 00:11:46.843 ], 00:11:46.843 "driver_specific": { 00:11:46.843 "raid": { 00:11:46.843 "uuid": "a7844c7c-2985-4b17-96a6-d281762536ad", 00:11:46.843 "strip_size_kb": 64, 00:11:46.843 "state": "online", 00:11:46.843 "raid_level": "raid0", 00:11:46.843 "superblock": true, 00:11:46.843 "num_base_bdevs": 4, 00:11:46.843 "num_base_bdevs_discovered": 4, 00:11:46.843 "num_base_bdevs_operational": 4, 00:11:46.843 "base_bdevs_list": [ 00:11:46.843 { 00:11:46.843 "name": "pt1", 00:11:46.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.843 "is_configured": true, 00:11:46.843 "data_offset": 2048, 00:11:46.843 "data_size": 63488 00:11:46.843 }, 00:11:46.843 { 00:11:46.843 "name": "pt2", 00:11:46.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.844 "is_configured": true, 00:11:46.844 "data_offset": 2048, 00:11:46.844 "data_size": 63488 00:11:46.844 }, 00:11:46.844 { 00:11:46.844 "name": "pt3", 00:11:46.844 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.844 "is_configured": true, 00:11:46.844 "data_offset": 2048, 00:11:46.844 "data_size": 63488 00:11:46.844 }, 00:11:46.844 { 00:11:46.844 "name": "pt4", 00:11:46.844 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.844 "is_configured": true, 00:11:46.844 "data_offset": 2048, 00:11:46.844 "data_size": 63488 00:11:46.844 } 00:11:46.844 ] 00:11:46.844 } 00:11:46.844 } 00:11:46.844 }' 00:11:46.844 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:47.103 pt2 00:11:47.103 pt3 00:11:47.103 pt4' 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.103 11:23:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.103 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.103 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.103 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.103 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.103 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:47.103 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.103 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.103 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.103 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.363 [2024-11-15 11:23:30.077033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a7844c7c-2985-4b17-96a6-d281762536ad 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a7844c7c-2985-4b17-96a6-d281762536ad ']' 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.363 [2024-11-15 11:23:30.120564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.363 [2024-11-15 11:23:30.120720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.363 [2024-11-15 11:23:30.120993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.363 [2024-11-15 11:23:30.121223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.363 [2024-11-15 11:23:30.121270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.363 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:47.364 11:23:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.364 [2024-11-15 11:23:30.276642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:47.364 [2024-11-15 11:23:30.279460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:47.364 [2024-11-15 11:23:30.279574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:47.364 [2024-11-15 11:23:30.279639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:47.364 [2024-11-15 11:23:30.279709] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:47.364 [2024-11-15 11:23:30.279795] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:47.364 [2024-11-15 11:23:30.279827] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:47.364 [2024-11-15 11:23:30.279857] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:47.364 [2024-11-15 11:23:30.279878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.364 [2024-11-15 11:23:30.279895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:47.364 request: 00:11:47.364 { 00:11:47.364 "name": "raid_bdev1", 00:11:47.364 "raid_level": "raid0", 00:11:47.364 "base_bdevs": [ 00:11:47.364 "malloc1", 00:11:47.364 "malloc2", 00:11:47.364 "malloc3", 00:11:47.364 "malloc4" 00:11:47.364 ], 00:11:47.364 "strip_size_kb": 64, 00:11:47.364 "superblock": false, 00:11:47.364 "method": "bdev_raid_create", 00:11:47.364 "req_id": 1 00:11:47.364 } 00:11:47.364 Got JSON-RPC error response 00:11:47.364 response: 00:11:47.364 { 00:11:47.364 "code": -17, 00:11:47.364 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:47.364 } 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.364 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.623 [2024-11-15 11:23:30.340658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.623 [2024-11-15 11:23:30.340887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.623 [2024-11-15 11:23:30.340976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:47.623 [2024-11-15 11:23:30.341213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.623 [2024-11-15 11:23:30.344223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.623 [2024-11-15 11:23:30.344435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.623 [2024-11-15 11:23:30.344644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:47.623 [2024-11-15 11:23:30.344816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:47.623 pt1 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.623 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.624 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.624 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.624 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.624 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.624 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.624 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.624 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.624 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.624 "name": "raid_bdev1", 00:11:47.624 "uuid": "a7844c7c-2985-4b17-96a6-d281762536ad", 00:11:47.624 "strip_size_kb": 64, 00:11:47.624 "state": "configuring", 00:11:47.624 "raid_level": "raid0", 00:11:47.624 "superblock": true, 00:11:47.624 "num_base_bdevs": 4, 00:11:47.624 "num_base_bdevs_discovered": 1, 00:11:47.624 "num_base_bdevs_operational": 4, 00:11:47.624 "base_bdevs_list": [ 00:11:47.624 { 00:11:47.624 "name": "pt1", 00:11:47.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.624 "is_configured": true, 00:11:47.624 "data_offset": 2048, 00:11:47.624 "data_size": 63488 00:11:47.624 }, 00:11:47.624 { 00:11:47.624 "name": null, 00:11:47.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.624 "is_configured": false, 00:11:47.624 "data_offset": 2048, 00:11:47.624 "data_size": 63488 00:11:47.624 }, 00:11:47.624 { 00:11:47.624 "name": null, 00:11:47.624 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.624 "is_configured": false, 00:11:47.624 "data_offset": 2048, 00:11:47.624 "data_size": 63488 00:11:47.624 }, 00:11:47.624 { 00:11:47.624 "name": null, 00:11:47.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.624 "is_configured": false, 00:11:47.624 "data_offset": 2048, 00:11:47.624 "data_size": 63488 00:11:47.624 } 00:11:47.624 ] 00:11:47.624 }' 00:11:47.624 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.624 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.191 [2024-11-15 11:23:30.852927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:48.191 [2024-11-15 11:23:30.853238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.191 [2024-11-15 11:23:30.853283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:48.191 [2024-11-15 11:23:30.853303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.191 [2024-11-15 11:23:30.853981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.191 [2024-11-15 11:23:30.854017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:48.191 [2024-11-15 11:23:30.854160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:48.191 [2024-11-15 11:23:30.854218] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:48.191 pt2 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.191 [2024-11-15 11:23:30.860880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:48.191 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.192 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.192 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.192 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.192 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.192 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.192 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.192 11:23:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.192 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.192 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.192 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.192 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.192 "name": "raid_bdev1", 00:11:48.192 "uuid": "a7844c7c-2985-4b17-96a6-d281762536ad", 00:11:48.192 "strip_size_kb": 64, 00:11:48.192 "state": "configuring", 00:11:48.192 "raid_level": "raid0", 00:11:48.192 "superblock": true, 00:11:48.192 "num_base_bdevs": 4, 00:11:48.192 "num_base_bdevs_discovered": 1, 00:11:48.192 "num_base_bdevs_operational": 4, 00:11:48.192 "base_bdevs_list": [ 00:11:48.192 { 00:11:48.192 "name": "pt1", 00:11:48.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.192 "is_configured": true, 00:11:48.192 "data_offset": 2048, 00:11:48.192 "data_size": 63488 00:11:48.192 }, 00:11:48.192 { 00:11:48.192 "name": null, 00:11:48.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.192 "is_configured": false, 00:11:48.192 "data_offset": 0, 00:11:48.192 "data_size": 63488 00:11:48.192 }, 00:11:48.192 { 00:11:48.192 "name": null, 00:11:48.192 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.192 "is_configured": false, 00:11:48.192 "data_offset": 2048, 00:11:48.192 "data_size": 63488 00:11:48.192 }, 00:11:48.192 { 00:11:48.192 "name": null, 00:11:48.192 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.192 "is_configured": false, 00:11:48.192 "data_offset": 2048, 00:11:48.192 "data_size": 63488 00:11:48.192 } 00:11:48.192 ] 00:11:48.192 }' 00:11:48.192 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.192 11:23:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.450 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:48.450 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:48.450 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:48.450 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.450 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.450 [2024-11-15 11:23:31.393052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:48.450 [2024-11-15 11:23:31.393302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.450 [2024-11-15 11:23:31.393384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:48.450 [2024-11-15 11:23:31.393622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.450 [2024-11-15 11:23:31.394330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.450 [2024-11-15 11:23:31.394364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:48.450 [2024-11-15 11:23:31.394497] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:48.450 [2024-11-15 11:23:31.394532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:48.709 pt2 00:11:48.709 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.709 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:48.709 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:48.709 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:48.709 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.709 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.709 [2024-11-15 11:23:31.401064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:48.709 [2024-11-15 11:23:31.401318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.709 [2024-11-15 11:23:31.401484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:48.709 [2024-11-15 11:23:31.401622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.709 [2024-11-15 11:23:31.402186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.709 [2024-11-15 11:23:31.402236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:48.709 [2024-11-15 11:23:31.402323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:48.709 [2024-11-15 11:23:31.402361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:48.709 pt3 00:11:48.709 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.709 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:48.709 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:48.709 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:48.709 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.709 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.709 [2024-11-15 11:23:31.408989] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:48.709 [2024-11-15 11:23:31.409246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.709 [2024-11-15 11:23:31.409321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:48.709 [2024-11-15 11:23:31.409453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.709 [2024-11-15 11:23:31.410015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.709 [2024-11-15 11:23:31.410208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:48.709 [2024-11-15 11:23:31.410410] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:48.709 [2024-11-15 11:23:31.410554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:48.709 [2024-11-15 11:23:31.410750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:48.709 [2024-11-15 11:23:31.410774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:48.709 [2024-11-15 11:23:31.411113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:48.710 [2024-11-15 11:23:31.411494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:48.710 [2024-11-15 11:23:31.411663] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:48.710 [2024-11-15 11:23:31.411967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.710 pt4 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.710 "name": "raid_bdev1", 00:11:48.710 "uuid": "a7844c7c-2985-4b17-96a6-d281762536ad", 00:11:48.710 "strip_size_kb": 64, 00:11:48.710 "state": "online", 00:11:48.710 "raid_level": "raid0", 00:11:48.710 
"superblock": true, 00:11:48.710 "num_base_bdevs": 4, 00:11:48.710 "num_base_bdevs_discovered": 4, 00:11:48.710 "num_base_bdevs_operational": 4, 00:11:48.710 "base_bdevs_list": [ 00:11:48.710 { 00:11:48.710 "name": "pt1", 00:11:48.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.710 "is_configured": true, 00:11:48.710 "data_offset": 2048, 00:11:48.710 "data_size": 63488 00:11:48.710 }, 00:11:48.710 { 00:11:48.710 "name": "pt2", 00:11:48.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.710 "is_configured": true, 00:11:48.710 "data_offset": 2048, 00:11:48.710 "data_size": 63488 00:11:48.710 }, 00:11:48.710 { 00:11:48.710 "name": "pt3", 00:11:48.710 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.710 "is_configured": true, 00:11:48.710 "data_offset": 2048, 00:11:48.710 "data_size": 63488 00:11:48.710 }, 00:11:48.710 { 00:11:48.710 "name": "pt4", 00:11:48.710 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.710 "is_configured": true, 00:11:48.710 "data_offset": 2048, 00:11:48.710 "data_size": 63488 00:11:48.710 } 00:11:48.710 ] 00:11:48.710 }' 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.710 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.276 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:49.276 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:49.276 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:49.276 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:49.276 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:49.276 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:49.276 11:23:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:49.276 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:49.276 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.276 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.276 [2024-11-15 11:23:31.937702] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.276 11:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.276 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:49.276 "name": "raid_bdev1", 00:11:49.276 "aliases": [ 00:11:49.276 "a7844c7c-2985-4b17-96a6-d281762536ad" 00:11:49.276 ], 00:11:49.276 "product_name": "Raid Volume", 00:11:49.276 "block_size": 512, 00:11:49.276 "num_blocks": 253952, 00:11:49.276 "uuid": "a7844c7c-2985-4b17-96a6-d281762536ad", 00:11:49.276 "assigned_rate_limits": { 00:11:49.276 "rw_ios_per_sec": 0, 00:11:49.276 "rw_mbytes_per_sec": 0, 00:11:49.276 "r_mbytes_per_sec": 0, 00:11:49.276 "w_mbytes_per_sec": 0 00:11:49.276 }, 00:11:49.276 "claimed": false, 00:11:49.276 "zoned": false, 00:11:49.276 "supported_io_types": { 00:11:49.276 "read": true, 00:11:49.276 "write": true, 00:11:49.276 "unmap": true, 00:11:49.276 "flush": true, 00:11:49.276 "reset": true, 00:11:49.276 "nvme_admin": false, 00:11:49.276 "nvme_io": false, 00:11:49.276 "nvme_io_md": false, 00:11:49.276 "write_zeroes": true, 00:11:49.276 "zcopy": false, 00:11:49.276 "get_zone_info": false, 00:11:49.276 "zone_management": false, 00:11:49.276 "zone_append": false, 00:11:49.276 "compare": false, 00:11:49.276 "compare_and_write": false, 00:11:49.276 "abort": false, 00:11:49.276 "seek_hole": false, 00:11:49.276 "seek_data": false, 00:11:49.276 "copy": false, 00:11:49.276 "nvme_iov_md": false 00:11:49.276 }, 00:11:49.276 
"memory_domains": [ 00:11:49.276 { 00:11:49.276 "dma_device_id": "system", 00:11:49.276 "dma_device_type": 1 00:11:49.276 }, 00:11:49.276 { 00:11:49.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.276 "dma_device_type": 2 00:11:49.276 }, 00:11:49.276 { 00:11:49.276 "dma_device_id": "system", 00:11:49.276 "dma_device_type": 1 00:11:49.276 }, 00:11:49.276 { 00:11:49.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.276 "dma_device_type": 2 00:11:49.276 }, 00:11:49.276 { 00:11:49.276 "dma_device_id": "system", 00:11:49.276 "dma_device_type": 1 00:11:49.276 }, 00:11:49.276 { 00:11:49.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.276 "dma_device_type": 2 00:11:49.276 }, 00:11:49.276 { 00:11:49.276 "dma_device_id": "system", 00:11:49.276 "dma_device_type": 1 00:11:49.276 }, 00:11:49.276 { 00:11:49.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.276 "dma_device_type": 2 00:11:49.276 } 00:11:49.276 ], 00:11:49.276 "driver_specific": { 00:11:49.276 "raid": { 00:11:49.276 "uuid": "a7844c7c-2985-4b17-96a6-d281762536ad", 00:11:49.276 "strip_size_kb": 64, 00:11:49.276 "state": "online", 00:11:49.276 "raid_level": "raid0", 00:11:49.276 "superblock": true, 00:11:49.276 "num_base_bdevs": 4, 00:11:49.276 "num_base_bdevs_discovered": 4, 00:11:49.276 "num_base_bdevs_operational": 4, 00:11:49.276 "base_bdevs_list": [ 00:11:49.276 { 00:11:49.276 "name": "pt1", 00:11:49.276 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.276 "is_configured": true, 00:11:49.276 "data_offset": 2048, 00:11:49.276 "data_size": 63488 00:11:49.276 }, 00:11:49.276 { 00:11:49.276 "name": "pt2", 00:11:49.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.276 "is_configured": true, 00:11:49.276 "data_offset": 2048, 00:11:49.276 "data_size": 63488 00:11:49.276 }, 00:11:49.276 { 00:11:49.276 "name": "pt3", 00:11:49.276 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.276 "is_configured": true, 00:11:49.276 "data_offset": 2048, 00:11:49.276 "data_size": 63488 
00:11:49.276 }, 00:11:49.276 { 00:11:49.276 "name": "pt4", 00:11:49.276 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.276 "is_configured": true, 00:11:49.276 "data_offset": 2048, 00:11:49.276 "data_size": 63488 00:11:49.276 } 00:11:49.276 ] 00:11:49.276 } 00:11:49.276 } 00:11:49.276 }' 00:11:49.276 11:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:49.276 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:49.276 pt2 00:11:49.276 pt3 00:11:49.276 pt4' 00:11:49.276 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.277 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.535 [2024-11-15 11:23:32.309725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a7844c7c-2985-4b17-96a6-d281762536ad '!=' a7844c7c-2985-4b17-96a6-d281762536ad ']' 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70699 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70699 ']' 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70699 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70699 00:11:49.535 killing process with pid 70699 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70699' 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70699 00:11:49.535 [2024-11-15 11:23:32.386008] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.535 11:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70699 00:11:49.535 [2024-11-15 11:23:32.386208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.535 [2024-11-15 11:23:32.386358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.535 [2024-11-15 11:23:32.386379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:49.793 [2024-11-15 11:23:32.727534] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.174 11:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:51.174 00:11:51.174 real 0m5.853s 00:11:51.174 user 0m8.683s 00:11:51.174 sys 0m0.920s 00:11:51.174 ************************************ 00:11:51.174 END TEST raid_superblock_test 00:11:51.174 ************************************ 00:11:51.174 11:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:51.174 11:23:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.174 11:23:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:51.174 11:23:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:51.174 11:23:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:51.174 11:23:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.174 ************************************ 00:11:51.174 START TEST raid_read_error_test 00:11:51.174 ************************************ 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:51.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oNuffXWlV8 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70969 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70969 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 70969 ']' 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.174 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:51.175 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.175 [2024-11-15 11:23:33.946699] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:11:51.175 [2024-11-15 11:23:33.946872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70969 ] 00:11:51.175 [2024-11-15 11:23:34.120100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.434 [2024-11-15 11:23:34.262841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.693 [2024-11-15 11:23:34.511941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.693 [2024-11-15 11:23:34.512039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.951 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:51.951 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:51.951 11:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.951 11:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.951 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.951 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.210 BaseBdev1_malloc 00:11:52.210 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.210 11:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:52.210 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.210 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.210 true 00:11:52.210 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:52.210 11:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:52.210 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.210 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.210 [2024-11-15 11:23:34.961032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:52.210 [2024-11-15 11:23:34.961123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.210 [2024-11-15 11:23:34.961154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:52.210 [2024-11-15 11:23:34.961189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.210 [2024-11-15 11:23:34.964730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.210 [2024-11-15 11:23:34.964942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:52.210 BaseBdev1 00:11:52.211 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.211 11:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.211 11:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:52.211 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.211 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 BaseBdev2_malloc 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 true 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 [2024-11-15 11:23:35.034670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:52.211 [2024-11-15 11:23:35.034913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.211 [2024-11-15 11:23:35.034981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:52.211 [2024-11-15 11:23:35.035114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.211 [2024-11-15 11:23:35.038689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.211 BaseBdev2 00:11:52.211 [2024-11-15 11:23:35.038900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 BaseBdev3_malloc 00:11:52.211 11:23:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 true 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 [2024-11-15 11:23:35.117103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:52.211 [2024-11-15 11:23:35.117370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.211 [2024-11-15 11:23:35.117442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:52.211 [2024-11-15 11:23:35.117645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.211 [2024-11-15 11:23:35.120585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.211 [2024-11-15 11:23:35.120805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:52.211 BaseBdev3 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.211 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.470 BaseBdev4_malloc 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.470 true 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.470 [2024-11-15 11:23:35.184844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:52.470 [2024-11-15 11:23:35.185092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.470 [2024-11-15 11:23:35.185129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:52.470 [2024-11-15 11:23:35.185148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.470 [2024-11-15 11:23:35.188074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.470 [2024-11-15 11:23:35.188139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:52.470 BaseBdev4 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.470 [2024-11-15 11:23:35.193065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.470 [2024-11-15 11:23:35.195835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.470 [2024-11-15 11:23:35.195930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.470 [2024-11-15 11:23:35.196030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:52.470 [2024-11-15 11:23:35.196423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:52.470 [2024-11-15 11:23:35.196449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:52.470 [2024-11-15 11:23:35.196796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:52.470 [2024-11-15 11:23:35.197021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:52.470 [2024-11-15 11:23:35.197040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:52.470 [2024-11-15 11:23:35.197306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:52.470 11:23:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.470 "name": "raid_bdev1", 00:11:52.470 "uuid": "f6dbfc3d-53c7-47b0-ae69-5c12fcb875bb", 00:11:52.470 "strip_size_kb": 64, 00:11:52.470 "state": "online", 00:11:52.470 "raid_level": "raid0", 00:11:52.470 "superblock": true, 00:11:52.470 "num_base_bdevs": 4, 00:11:52.470 "num_base_bdevs_discovered": 4, 00:11:52.470 "num_base_bdevs_operational": 4, 00:11:52.470 "base_bdevs_list": [ 00:11:52.470 
{ 00:11:52.470 "name": "BaseBdev1", 00:11:52.470 "uuid": "1b22e86e-607f-554f-a9ab-c5eded9c9944", 00:11:52.470 "is_configured": true, 00:11:52.470 "data_offset": 2048, 00:11:52.470 "data_size": 63488 00:11:52.470 }, 00:11:52.470 { 00:11:52.470 "name": "BaseBdev2", 00:11:52.470 "uuid": "7d9ebc96-86c7-56ca-8b9e-9d16a1597ccf", 00:11:52.470 "is_configured": true, 00:11:52.470 "data_offset": 2048, 00:11:52.470 "data_size": 63488 00:11:52.470 }, 00:11:52.470 { 00:11:52.470 "name": "BaseBdev3", 00:11:52.470 "uuid": "00be135e-4d1f-5c20-a301-6dd4f6b6eb63", 00:11:52.470 "is_configured": true, 00:11:52.470 "data_offset": 2048, 00:11:52.470 "data_size": 63488 00:11:52.470 }, 00:11:52.470 { 00:11:52.470 "name": "BaseBdev4", 00:11:52.470 "uuid": "888498fa-77ed-57e3-840e-cbf7b5c73779", 00:11:52.470 "is_configured": true, 00:11:52.470 "data_offset": 2048, 00:11:52.470 "data_size": 63488 00:11:52.470 } 00:11:52.470 ] 00:11:52.470 }' 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.470 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.037 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:53.037 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:53.037 [2024-11-15 11:23:35.855105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.974 11:23:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.974 11:23:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.974 "name": "raid_bdev1", 00:11:53.974 "uuid": "f6dbfc3d-53c7-47b0-ae69-5c12fcb875bb", 00:11:53.974 "strip_size_kb": 64, 00:11:53.974 "state": "online", 00:11:53.974 "raid_level": "raid0", 00:11:53.974 "superblock": true, 00:11:53.974 "num_base_bdevs": 4, 00:11:53.974 "num_base_bdevs_discovered": 4, 00:11:53.974 "num_base_bdevs_operational": 4, 00:11:53.974 "base_bdevs_list": [ 00:11:53.974 { 00:11:53.974 "name": "BaseBdev1", 00:11:53.974 "uuid": "1b22e86e-607f-554f-a9ab-c5eded9c9944", 00:11:53.974 "is_configured": true, 00:11:53.974 "data_offset": 2048, 00:11:53.974 "data_size": 63488 00:11:53.974 }, 00:11:53.974 { 00:11:53.974 "name": "BaseBdev2", 00:11:53.974 "uuid": "7d9ebc96-86c7-56ca-8b9e-9d16a1597ccf", 00:11:53.974 "is_configured": true, 00:11:53.974 "data_offset": 2048, 00:11:53.974 "data_size": 63488 00:11:53.974 }, 00:11:53.974 { 00:11:53.974 "name": "BaseBdev3", 00:11:53.974 "uuid": "00be135e-4d1f-5c20-a301-6dd4f6b6eb63", 00:11:53.974 "is_configured": true, 00:11:53.974 "data_offset": 2048, 00:11:53.974 "data_size": 63488 00:11:53.974 }, 00:11:53.974 { 00:11:53.974 "name": "BaseBdev4", 00:11:53.974 "uuid": "888498fa-77ed-57e3-840e-cbf7b5c73779", 00:11:53.974 "is_configured": true, 00:11:53.974 "data_offset": 2048, 00:11:53.974 "data_size": 63488 00:11:53.974 } 00:11:53.974 ] 00:11:53.974 }' 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.974 11:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.541 [2024-11-15 11:23:37.284008] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.541 [2024-11-15 11:23:37.284050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.541 { 00:11:54.541 "results": [ 00:11:54.541 { 00:11:54.541 "job": "raid_bdev1", 00:11:54.541 "core_mask": "0x1", 00:11:54.541 "workload": "randrw", 00:11:54.541 "percentage": 50, 00:11:54.541 "status": "finished", 00:11:54.541 "queue_depth": 1, 00:11:54.541 "io_size": 131072, 00:11:54.541 "runtime": 1.426193, 00:11:54.541 "iops": 10176.042092479769, 00:11:54.541 "mibps": 1272.0052615599711, 00:11:54.541 "io_failed": 1, 00:11:54.541 "io_timeout": 0, 00:11:54.541 "avg_latency_us": 137.98750197301663, 00:11:54.541 "min_latency_us": 37.236363636363635, 00:11:54.541 "max_latency_us": 1809.6872727272728 00:11:54.541 } 00:11:54.541 ], 00:11:54.541 "core_count": 1 00:11:54.541 } 00:11:54.541 [2024-11-15 11:23:37.287539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.541 [2024-11-15 11:23:37.287643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.541 [2024-11-15 11:23:37.287702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.541 [2024-11-15 11:23:37.287721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70969 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 70969 ']' 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 70969 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:54.541 11:23:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70969 00:11:54.541 killing process with pid 70969 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70969' 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 70969 00:11:54.541 [2024-11-15 11:23:37.325477] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.541 11:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 70969 00:11:54.800 [2024-11-15 11:23:37.603722] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.177 11:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oNuffXWlV8 00:11:56.178 11:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:56.178 11:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:56.178 11:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:56.178 11:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:56.178 ************************************ 00:11:56.178 END TEST raid_read_error_test 00:11:56.178 ************************************ 00:11:56.178 11:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:56.178 11:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:56.178 11:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:56.178 
00:11:56.178 real 0m4.959s 00:11:56.178 user 0m5.961s 00:11:56.178 sys 0m0.706s 00:11:56.178 11:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:56.178 11:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.178 11:23:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:56.178 11:23:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:56.178 11:23:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:56.178 11:23:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.178 ************************************ 00:11:56.178 START TEST raid_write_error_test 00:11:56.178 ************************************ 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.178 11:23:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.d4fCbiYvyI 
00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71117 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71117 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71117 ']' 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:56.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:56.178 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.178 [2024-11-15 11:23:38.977249] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:11:56.178 [2024-11-15 11:23:38.977450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71117 ] 00:11:56.437 [2024-11-15 11:23:39.156455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.437 [2024-11-15 11:23:39.311513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.696 [2024-11-15 11:23:39.525809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.696 [2024-11-15 11:23:39.525905] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.264 BaseBdev1_malloc 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.264 true 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.264 [2024-11-15 11:23:40.088307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:57.264 [2024-11-15 11:23:40.088594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.264 [2024-11-15 11:23:40.088671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:57.264 [2024-11-15 11:23:40.088894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.264 [2024-11-15 11:23:40.091799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.264 [2024-11-15 11:23:40.091979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.264 BaseBdev1 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.264 BaseBdev2_malloc 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:57.264 11:23:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.264 true 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.264 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.264 [2024-11-15 11:23:40.148879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:57.265 [2024-11-15 11:23:40.148964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.265 [2024-11-15 11:23:40.148990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:57.265 [2024-11-15 11:23:40.149008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.265 [2024-11-15 11:23:40.152124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.265 [2024-11-15 11:23:40.152368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:57.265 BaseBdev2 00:11:57.265 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.265 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.265 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:57.265 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.265 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:57.265 BaseBdev3_malloc 00:11:57.265 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.265 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:57.265 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.265 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.265 true 00:11:57.265 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.524 [2024-11-15 11:23:40.215224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:57.524 [2024-11-15 11:23:40.215593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.524 [2024-11-15 11:23:40.215637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:57.524 [2024-11-15 11:23:40.215658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.524 [2024-11-15 11:23:40.218968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.524 BaseBdev3 00:11:57.524 [2024-11-15 11:23:40.219156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.524 BaseBdev4_malloc 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.524 true 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.524 [2024-11-15 11:23:40.282259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:57.524 [2024-11-15 11:23:40.282463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.524 [2024-11-15 11:23:40.282553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:57.524 [2024-11-15 11:23:40.282771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.524 [2024-11-15 11:23:40.286088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.524 BaseBdev4 00:11:57.524 [2024-11-15 11:23:40.286265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 
00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.524 [2024-11-15 11:23:40.290576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.524 [2024-11-15 11:23:40.293394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.524 [2024-11-15 11:23:40.293506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.524 [2024-11-15 11:23:40.293646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:57.524 [2024-11-15 11:23:40.293933] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:57.524 [2024-11-15 11:23:40.293958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:57.524 [2024-11-15 11:23:40.294331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:57.524 [2024-11-15 11:23:40.294570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:57.524 [2024-11-15 11:23:40.294611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:57.524 [2024-11-15 11:23:40.294886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.524 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.524 "name": "raid_bdev1", 00:11:57.524 "uuid": "20d68d65-7749-464c-80f8-fa32ce0f6440", 00:11:57.524 "strip_size_kb": 64, 00:11:57.524 "state": "online", 00:11:57.524 "raid_level": "raid0", 00:11:57.524 "superblock": true, 00:11:57.524 "num_base_bdevs": 4, 00:11:57.524 "num_base_bdevs_discovered": 4, 00:11:57.524 
"num_base_bdevs_operational": 4, 00:11:57.524 "base_bdevs_list": [ 00:11:57.524 { 00:11:57.524 "name": "BaseBdev1", 00:11:57.524 "uuid": "24a9a200-97f1-5d50-95d2-06d9a2e18ce1", 00:11:57.524 "is_configured": true, 00:11:57.524 "data_offset": 2048, 00:11:57.524 "data_size": 63488 00:11:57.524 }, 00:11:57.524 { 00:11:57.524 "name": "BaseBdev2", 00:11:57.524 "uuid": "b8b03503-904c-5f0e-96ad-5e8896e3a91c", 00:11:57.524 "is_configured": true, 00:11:57.524 "data_offset": 2048, 00:11:57.524 "data_size": 63488 00:11:57.525 }, 00:11:57.525 { 00:11:57.525 "name": "BaseBdev3", 00:11:57.525 "uuid": "fbb704e8-4eb2-5d96-9697-a5561082dde4", 00:11:57.525 "is_configured": true, 00:11:57.525 "data_offset": 2048, 00:11:57.525 "data_size": 63488 00:11:57.525 }, 00:11:57.525 { 00:11:57.525 "name": "BaseBdev4", 00:11:57.525 "uuid": "92e7ec5a-d147-596a-80c1-cbfe8528276f", 00:11:57.525 "is_configured": true, 00:11:57.525 "data_offset": 2048, 00:11:57.525 "data_size": 63488 00:11:57.525 } 00:11:57.525 ] 00:11:57.525 }' 00:11:57.525 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.525 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.092 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:58.092 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:58.092 [2024-11-15 11:23:40.976610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.070 "name": "raid_bdev1", 00:11:59.070 "uuid": "20d68d65-7749-464c-80f8-fa32ce0f6440", 00:11:59.070 "strip_size_kb": 64, 00:11:59.070 "state": "online", 00:11:59.070 "raid_level": "raid0", 00:11:59.070 "superblock": true, 00:11:59.070 "num_base_bdevs": 4, 00:11:59.070 "num_base_bdevs_discovered": 4, 00:11:59.070 "num_base_bdevs_operational": 4, 00:11:59.070 "base_bdevs_list": [ 00:11:59.070 { 00:11:59.070 "name": "BaseBdev1", 00:11:59.070 "uuid": "24a9a200-97f1-5d50-95d2-06d9a2e18ce1", 00:11:59.070 "is_configured": true, 00:11:59.070 "data_offset": 2048, 00:11:59.070 "data_size": 63488 00:11:59.070 }, 00:11:59.070 { 00:11:59.070 "name": "BaseBdev2", 00:11:59.070 "uuid": "b8b03503-904c-5f0e-96ad-5e8896e3a91c", 00:11:59.070 "is_configured": true, 00:11:59.070 "data_offset": 2048, 00:11:59.070 "data_size": 63488 00:11:59.070 }, 00:11:59.070 { 00:11:59.070 "name": "BaseBdev3", 00:11:59.070 "uuid": "fbb704e8-4eb2-5d96-9697-a5561082dde4", 00:11:59.070 "is_configured": true, 00:11:59.070 "data_offset": 2048, 00:11:59.070 "data_size": 63488 00:11:59.070 }, 00:11:59.070 { 00:11:59.070 "name": "BaseBdev4", 00:11:59.070 "uuid": "92e7ec5a-d147-596a-80c1-cbfe8528276f", 00:11:59.070 "is_configured": true, 00:11:59.070 "data_offset": 2048, 00:11:59.070 "data_size": 63488 00:11:59.070 } 00:11:59.070 ] 00:11:59.070 }' 00:11:59.070 11:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.071 11:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:59.636 [2024-11-15 11:23:42.407474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:59.636 [2024-11-15 11:23:42.407530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.636 [2024-11-15 11:23:42.411417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.636 { 00:11:59.636 "results": [ 00:11:59.636 { 00:11:59.636 "job": "raid_bdev1", 00:11:59.636 "core_mask": "0x1", 00:11:59.636 "workload": "randrw", 00:11:59.636 "percentage": 50, 00:11:59.636 "status": "finished", 00:11:59.636 "queue_depth": 1, 00:11:59.636 "io_size": 131072, 00:11:59.636 "runtime": 1.428195, 00:11:59.636 "iops": 9264.841285678776, 00:11:59.636 "mibps": 1158.105160709847, 00:11:59.636 "io_failed": 1, 00:11:59.636 "io_timeout": 0, 00:11:59.636 "avg_latency_us": 151.30172804902344, 00:11:59.636 "min_latency_us": 35.14181818181818, 00:11:59.636 "max_latency_us": 1906.5018181818182 00:11:59.636 } 00:11:59.636 ], 00:11:59.636 "core_count": 1 00:11:59.636 } 00:11:59.636 [2024-11-15 11:23:42.411833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.636 [2024-11-15 11:23:42.411915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.636 [2024-11-15 11:23:42.411937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71117 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71117 ']' 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71117 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 
00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71117 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71117' 00:11:59.636 killing process with pid 71117 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71117 00:11:59.636 [2024-11-15 11:23:42.456439] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.636 11:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71117 00:11:59.895 [2024-11-15 11:23:42.764514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.273 11:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.d4fCbiYvyI 00:12:01.273 11:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:01.273 11:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:01.273 11:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:01.273 11:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:01.273 11:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:01.273 11:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:01.273 ************************************ 00:12:01.273 END TEST raid_write_error_test 00:12:01.273 ************************************ 00:12:01.273 11:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.70 != \0\.\0\0 ]] 00:12:01.273 00:12:01.273 real 0m5.109s 00:12:01.273 user 0m6.303s 00:12:01.273 sys 0m0.672s 00:12:01.273 11:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:01.273 11:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.273 11:23:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:01.273 11:23:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:01.273 11:23:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:01.273 11:23:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:01.273 11:23:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.273 ************************************ 00:12:01.273 START TEST raid_state_function_test 00:12:01.273 ************************************ 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.273 Process raid pid: 71269 00:12:01.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71269 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71269' 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71269 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71269 ']' 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.273 
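In the trace above, bdev_raid.sh assembles the create arguments: any level other than raid1 gets a strip size (`-z 64`), and `superblock=false` leaves the superblock argument empty. A self-contained sketch of that branching — the `-s` spelling for the superblock flag is an assumption for illustration, not confirmed by this trace:

```shell
#!/usr/bin/env bash
# Assemble bdev_raid_create arguments the way the traced test does.
raid_level=concat
superblock=false

strip_size_create_arg=
superblock_create_arg=

# raid1 takes no strip size; every other level uses -z 64 here.
if [ "$raid_level" != raid1 ]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
fi

# superblock=false means no extra flag is passed (-s is hypothetical).
if [ "$superblock" = true ]; then
    superblock_create_arg="-s"
fi

echo "bdev_raid_create $strip_size_create_arg -r $raid_level $superblock_create_arg -n Existed_Raid"
```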
11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:01.273 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.273 [2024-11-15 11:23:44.172517] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:12:01.273 [2024-11-15 11:23:44.173269] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.533 [2024-11-15 11:23:44.361781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.791 [2024-11-15 11:23:44.506054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.792 [2024-11-15 11:23:44.728155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.792 [2024-11-15 11:23:44.728508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.357 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:02.357 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:02.357 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.357 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.357 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.357 
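`waitforlisten` in the trace blocks until the freshly started bdev_svc process is up on `/var/tmp/spdk.sock`, retrying up to `max_retries` times. The poll-with-retry-cap pattern can be sketched as below; this is a simplified stand-in (a temp file in place of the real UNIX socket, existence check in place of an RPC probe) so the snippet is self-contained:

```shell
#!/usr/bin/env bash
# Poll for the RPC socket path, as waitforlisten does, with a retry cap.
rpc_addr=$(mktemp)   # stand-in for /var/tmp/spdk.sock; already exists here
max_retries=100
i=0
while [ ! -e "$rpc_addr" ] && [ "$i" -lt "$max_retries" ]; do
    sleep 0.1         # back off between probes
    i=$(( i + 1 ))
done
if [ -e "$rpc_addr" ]; then
    echo "listening on $rpc_addr"
fi
rm -f "$rpc_addr"
```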
[2024-11-15 11:23:45.073657] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.357 [2024-11-15 11:23:45.073890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.357 [2024-11-15 11:23:45.074100] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.357 [2024-11-15 11:23:45.074183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.357 [2024-11-15 11:23:45.074306] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.357 [2024-11-15 11:23:45.074368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.357 [2024-11-15 11:23:45.074408] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:02.357 [2024-11-15 11:23:45.074549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.357 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.357 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.358 
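`verify_raid_bdev_state`, entered at the end of the trace above, pipes `rpc_cmd bdev_raid_get_bdevs all` through a jq filter to isolate the bdev under test before comparing fields such as `state`. A self-contained sketch, with a hypothetical two-entry result standing in for the live RPC output:

```shell
#!/usr/bin/env bash
# Filter bdev_raid_get_bdevs-style JSON down to one raid bdev, as the
# test's jq expression does, then pull out the state for comparison.
bdevs='[
  {"name": "Existed_Raid", "state": "configuring", "num_base_bdevs_discovered": 0},
  {"name": "Other_Raid",   "state": "online",      "num_base_bdevs_discovered": 2}
]'

# Same filter as the bdev_raid.sh@113 trace applies to the RPC output.
raid_bdev_info=$(echo "$bdevs" | jq -r '.[] | select(.name == "Existed_Raid")')

state=$(echo "$raid_bdev_info" | jq -r '.state')
[[ $state == configuring ]] && echo "Existed_Raid is $state"
```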
11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.358 "name": "Existed_Raid", 00:12:02.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.358 "strip_size_kb": 64, 00:12:02.358 "state": "configuring", 00:12:02.358 "raid_level": "concat", 00:12:02.358 "superblock": false, 00:12:02.358 "num_base_bdevs": 4, 00:12:02.358 "num_base_bdevs_discovered": 0, 00:12:02.358 "num_base_bdevs_operational": 4, 00:12:02.358 "base_bdevs_list": [ 00:12:02.358 { 00:12:02.358 "name": "BaseBdev1", 00:12:02.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.358 "is_configured": false, 00:12:02.358 "data_offset": 0, 00:12:02.358 "data_size": 0 00:12:02.358 }, 00:12:02.358 { 00:12:02.358 "name": "BaseBdev2", 00:12:02.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.358 "is_configured": false, 00:12:02.358 "data_offset": 0, 00:12:02.358 "data_size": 0 00:12:02.358 }, 00:12:02.358 { 00:12:02.358 "name": "BaseBdev3", 00:12:02.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.358 "is_configured": false, 00:12:02.358 
"data_offset": 0, 00:12:02.358 "data_size": 0 00:12:02.358 }, 00:12:02.358 { 00:12:02.358 "name": "BaseBdev4", 00:12:02.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.358 "is_configured": false, 00:12:02.358 "data_offset": 0, 00:12:02.358 "data_size": 0 00:12:02.358 } 00:12:02.358 ] 00:12:02.358 }' 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.358 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.925 [2024-11-15 11:23:45.585731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.925 [2024-11-15 11:23:45.585778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.925 [2024-11-15 11:23:45.593722] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.925 [2024-11-15 11:23:45.593789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.925 [2024-11-15 11:23:45.593819] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:12:02.925 [2024-11-15 11:23:45.593835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.925 [2024-11-15 11:23:45.593845] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.925 [2024-11-15 11:23:45.593860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.925 [2024-11-15 11:23:45.593869] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:02.925 [2024-11-15 11:23:45.593884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.925 [2024-11-15 11:23:45.640249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.925 BaseBdev1 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:02.925 11:23:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:02.925 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.926 [ 00:12:02.926 { 00:12:02.926 "name": "BaseBdev1", 00:12:02.926 "aliases": [ 00:12:02.926 "cd8cc479-c0ba-4465-a107-9a2824c541c2" 00:12:02.926 ], 00:12:02.926 "product_name": "Malloc disk", 00:12:02.926 "block_size": 512, 00:12:02.926 "num_blocks": 65536, 00:12:02.926 "uuid": "cd8cc479-c0ba-4465-a107-9a2824c541c2", 00:12:02.926 "assigned_rate_limits": { 00:12:02.926 "rw_ios_per_sec": 0, 00:12:02.926 "rw_mbytes_per_sec": 0, 00:12:02.926 "r_mbytes_per_sec": 0, 00:12:02.926 "w_mbytes_per_sec": 0 00:12:02.926 }, 00:12:02.926 "claimed": true, 00:12:02.926 "claim_type": "exclusive_write", 00:12:02.926 "zoned": false, 00:12:02.926 "supported_io_types": { 00:12:02.926 "read": true, 00:12:02.926 "write": true, 00:12:02.926 "unmap": true, 00:12:02.926 "flush": true, 00:12:02.926 "reset": true, 00:12:02.926 "nvme_admin": false, 00:12:02.926 "nvme_io": false, 00:12:02.926 "nvme_io_md": false, 00:12:02.926 "write_zeroes": true, 00:12:02.926 "zcopy": true, 00:12:02.926 "get_zone_info": false, 00:12:02.926 "zone_management": false, 00:12:02.926 "zone_append": false, 00:12:02.926 "compare": false, 
00:12:02.926 "compare_and_write": false, 00:12:02.926 "abort": true, 00:12:02.926 "seek_hole": false, 00:12:02.926 "seek_data": false, 00:12:02.926 "copy": true, 00:12:02.926 "nvme_iov_md": false 00:12:02.926 }, 00:12:02.926 "memory_domains": [ 00:12:02.926 { 00:12:02.926 "dma_device_id": "system", 00:12:02.926 "dma_device_type": 1 00:12:02.926 }, 00:12:02.926 { 00:12:02.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.926 "dma_device_type": 2 00:12:02.926 } 00:12:02.926 ], 00:12:02.926 "driver_specific": {} 00:12:02.926 } 00:12:02.926 ] 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.926 "name": "Existed_Raid", 00:12:02.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.926 "strip_size_kb": 64, 00:12:02.926 "state": "configuring", 00:12:02.926 "raid_level": "concat", 00:12:02.926 "superblock": false, 00:12:02.926 "num_base_bdevs": 4, 00:12:02.926 "num_base_bdevs_discovered": 1, 00:12:02.926 "num_base_bdevs_operational": 4, 00:12:02.926 "base_bdevs_list": [ 00:12:02.926 { 00:12:02.926 "name": "BaseBdev1", 00:12:02.926 "uuid": "cd8cc479-c0ba-4465-a107-9a2824c541c2", 00:12:02.926 "is_configured": true, 00:12:02.926 "data_offset": 0, 00:12:02.926 "data_size": 65536 00:12:02.926 }, 00:12:02.926 { 00:12:02.926 "name": "BaseBdev2", 00:12:02.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.926 "is_configured": false, 00:12:02.926 "data_offset": 0, 00:12:02.926 "data_size": 0 00:12:02.926 }, 00:12:02.926 { 00:12:02.926 "name": "BaseBdev3", 00:12:02.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.926 "is_configured": false, 00:12:02.926 "data_offset": 0, 00:12:02.926 "data_size": 0 00:12:02.926 }, 00:12:02.926 { 00:12:02.926 "name": "BaseBdev4", 00:12:02.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.926 "is_configured": false, 00:12:02.926 "data_offset": 0, 00:12:02.926 "data_size": 0 00:12:02.926 } 00:12:02.926 ] 00:12:02.926 }' 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.926 11:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.491 [2024-11-15 11:23:46.168497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.491 [2024-11-15 11:23:46.168626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.491 [2024-11-15 11:23:46.176560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.491 [2024-11-15 11:23:46.179232] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.491 [2024-11-15 11:23:46.179449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.491 [2024-11-15 11:23:46.179479] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.491 [2024-11-15 11:23:46.179501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.491 [2024-11-15 11:23:46.179512] bdev.c:8672:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:03.491 [2024-11-15 11:23:46.179527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.491 "name": "Existed_Raid", 00:12:03.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.491 "strip_size_kb": 64, 00:12:03.491 "state": "configuring", 00:12:03.491 "raid_level": "concat", 00:12:03.491 "superblock": false, 00:12:03.491 "num_base_bdevs": 4, 00:12:03.491 "num_base_bdevs_discovered": 1, 00:12:03.491 "num_base_bdevs_operational": 4, 00:12:03.491 "base_bdevs_list": [ 00:12:03.491 { 00:12:03.491 "name": "BaseBdev1", 00:12:03.491 "uuid": "cd8cc479-c0ba-4465-a107-9a2824c541c2", 00:12:03.491 "is_configured": true, 00:12:03.491 "data_offset": 0, 00:12:03.491 "data_size": 65536 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "name": "BaseBdev2", 00:12:03.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.491 "is_configured": false, 00:12:03.491 "data_offset": 0, 00:12:03.491 "data_size": 0 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "name": "BaseBdev3", 00:12:03.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.491 "is_configured": false, 00:12:03.491 "data_offset": 0, 00:12:03.491 "data_size": 0 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "name": "BaseBdev4", 00:12:03.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.491 "is_configured": false, 00:12:03.491 "data_offset": 0, 00:12:03.491 "data_size": 0 00:12:03.491 } 00:12:03.491 ] 00:12:03.491 }' 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.491 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev2 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.059 [2024-11-15 11:23:46.746658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.059 BaseBdev2 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.059 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.059 [ 00:12:04.059 { 
00:12:04.059 "name": "BaseBdev2", 00:12:04.059 "aliases": [ 00:12:04.059 "7a2f049c-0f99-42a2-932b-dd96f65115a7" 00:12:04.059 ], 00:12:04.059 "product_name": "Malloc disk", 00:12:04.059 "block_size": 512, 00:12:04.059 "num_blocks": 65536, 00:12:04.059 "uuid": "7a2f049c-0f99-42a2-932b-dd96f65115a7", 00:12:04.059 "assigned_rate_limits": { 00:12:04.059 "rw_ios_per_sec": 0, 00:12:04.059 "rw_mbytes_per_sec": 0, 00:12:04.059 "r_mbytes_per_sec": 0, 00:12:04.059 "w_mbytes_per_sec": 0 00:12:04.059 }, 00:12:04.059 "claimed": true, 00:12:04.059 "claim_type": "exclusive_write", 00:12:04.059 "zoned": false, 00:12:04.059 "supported_io_types": { 00:12:04.059 "read": true, 00:12:04.059 "write": true, 00:12:04.060 "unmap": true, 00:12:04.060 "flush": true, 00:12:04.060 "reset": true, 00:12:04.060 "nvme_admin": false, 00:12:04.060 "nvme_io": false, 00:12:04.060 "nvme_io_md": false, 00:12:04.060 "write_zeroes": true, 00:12:04.060 "zcopy": true, 00:12:04.060 "get_zone_info": false, 00:12:04.060 "zone_management": false, 00:12:04.060 "zone_append": false, 00:12:04.060 "compare": false, 00:12:04.060 "compare_and_write": false, 00:12:04.060 "abort": true, 00:12:04.060 "seek_hole": false, 00:12:04.060 "seek_data": false, 00:12:04.060 "copy": true, 00:12:04.060 "nvme_iov_md": false 00:12:04.060 }, 00:12:04.060 "memory_domains": [ 00:12:04.060 { 00:12:04.060 "dma_device_id": "system", 00:12:04.060 "dma_device_type": 1 00:12:04.060 }, 00:12:04.060 { 00:12:04.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.060 "dma_device_type": 2 00:12:04.060 } 00:12:04.060 ], 00:12:04.060 "driver_specific": {} 00:12:04.060 } 00:12:04.060 ] 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.060 "name": "Existed_Raid", 00:12:04.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.060 
"strip_size_kb": 64, 00:12:04.060 "state": "configuring", 00:12:04.060 "raid_level": "concat", 00:12:04.060 "superblock": false, 00:12:04.060 "num_base_bdevs": 4, 00:12:04.060 "num_base_bdevs_discovered": 2, 00:12:04.060 "num_base_bdevs_operational": 4, 00:12:04.060 "base_bdevs_list": [ 00:12:04.060 { 00:12:04.060 "name": "BaseBdev1", 00:12:04.060 "uuid": "cd8cc479-c0ba-4465-a107-9a2824c541c2", 00:12:04.060 "is_configured": true, 00:12:04.060 "data_offset": 0, 00:12:04.060 "data_size": 65536 00:12:04.060 }, 00:12:04.060 { 00:12:04.060 "name": "BaseBdev2", 00:12:04.060 "uuid": "7a2f049c-0f99-42a2-932b-dd96f65115a7", 00:12:04.060 "is_configured": true, 00:12:04.060 "data_offset": 0, 00:12:04.060 "data_size": 65536 00:12:04.060 }, 00:12:04.060 { 00:12:04.060 "name": "BaseBdev3", 00:12:04.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.060 "is_configured": false, 00:12:04.060 "data_offset": 0, 00:12:04.060 "data_size": 0 00:12:04.060 }, 00:12:04.060 { 00:12:04.060 "name": "BaseBdev4", 00:12:04.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.060 "is_configured": false, 00:12:04.060 "data_offset": 0, 00:12:04.060 "data_size": 0 00:12:04.060 } 00:12:04.060 ] 00:12:04.060 }' 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.060 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.626 [2024-11-15 11:23:47.351926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.626 BaseBdev3 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.626 [ 00:12:04.626 { 00:12:04.626 "name": "BaseBdev3", 00:12:04.626 "aliases": [ 00:12:04.626 "17021421-f6f1-421a-8078-4cfc86003e0a" 00:12:04.626 ], 00:12:04.626 "product_name": "Malloc disk", 00:12:04.626 "block_size": 512, 00:12:04.626 "num_blocks": 65536, 00:12:04.626 "uuid": "17021421-f6f1-421a-8078-4cfc86003e0a", 00:12:04.626 "assigned_rate_limits": { 00:12:04.626 "rw_ios_per_sec": 0, 00:12:04.626 "rw_mbytes_per_sec": 0, 00:12:04.626 "r_mbytes_per_sec": 0, 00:12:04.626 "w_mbytes_per_sec": 0 
00:12:04.626 }, 00:12:04.626 "claimed": true, 00:12:04.626 "claim_type": "exclusive_write", 00:12:04.626 "zoned": false, 00:12:04.626 "supported_io_types": { 00:12:04.626 "read": true, 00:12:04.626 "write": true, 00:12:04.626 "unmap": true, 00:12:04.626 "flush": true, 00:12:04.626 "reset": true, 00:12:04.626 "nvme_admin": false, 00:12:04.626 "nvme_io": false, 00:12:04.626 "nvme_io_md": false, 00:12:04.626 "write_zeroes": true, 00:12:04.626 "zcopy": true, 00:12:04.626 "get_zone_info": false, 00:12:04.626 "zone_management": false, 00:12:04.626 "zone_append": false, 00:12:04.626 "compare": false, 00:12:04.626 "compare_and_write": false, 00:12:04.626 "abort": true, 00:12:04.626 "seek_hole": false, 00:12:04.626 "seek_data": false, 00:12:04.626 "copy": true, 00:12:04.626 "nvme_iov_md": false 00:12:04.626 }, 00:12:04.626 "memory_domains": [ 00:12:04.626 { 00:12:04.626 "dma_device_id": "system", 00:12:04.626 "dma_device_type": 1 00:12:04.626 }, 00:12:04.626 { 00:12:04.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.626 "dma_device_type": 2 00:12:04.626 } 00:12:04.626 ], 00:12:04.626 "driver_specific": {} 00:12:04.626 } 00:12:04.626 ] 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.626 11:23:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.626 "name": "Existed_Raid", 00:12:04.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.626 "strip_size_kb": 64, 00:12:04.626 "state": "configuring", 00:12:04.626 "raid_level": "concat", 00:12:04.626 "superblock": false, 00:12:04.626 "num_base_bdevs": 4, 00:12:04.626 "num_base_bdevs_discovered": 3, 00:12:04.626 "num_base_bdevs_operational": 4, 00:12:04.626 "base_bdevs_list": [ 00:12:04.626 { 00:12:04.626 "name": "BaseBdev1", 00:12:04.626 "uuid": "cd8cc479-c0ba-4465-a107-9a2824c541c2", 00:12:04.626 "is_configured": true, 00:12:04.626 "data_offset": 
0, 00:12:04.626 "data_size": 65536 00:12:04.626 }, 00:12:04.626 { 00:12:04.626 "name": "BaseBdev2", 00:12:04.626 "uuid": "7a2f049c-0f99-42a2-932b-dd96f65115a7", 00:12:04.626 "is_configured": true, 00:12:04.626 "data_offset": 0, 00:12:04.626 "data_size": 65536 00:12:04.626 }, 00:12:04.626 { 00:12:04.626 "name": "BaseBdev3", 00:12:04.626 "uuid": "17021421-f6f1-421a-8078-4cfc86003e0a", 00:12:04.626 "is_configured": true, 00:12:04.626 "data_offset": 0, 00:12:04.626 "data_size": 65536 00:12:04.626 }, 00:12:04.626 { 00:12:04.626 "name": "BaseBdev4", 00:12:04.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.626 "is_configured": false, 00:12:04.626 "data_offset": 0, 00:12:04.626 "data_size": 0 00:12:04.626 } 00:12:04.626 ] 00:12:04.626 }' 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.626 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.193 [2024-11-15 11:23:47.959971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:05.193 [2024-11-15 11:23:47.960046] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:05.193 [2024-11-15 11:23:47.960075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:05.193 [2024-11-15 11:23:47.960509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:05.193 [2024-11-15 11:23:47.960795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:05.193 [2024-11-15 11:23:47.960816] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:05.193 [2024-11-15 11:23:47.961137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.193 BaseBdev4 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.193 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.193 [ 00:12:05.193 { 00:12:05.193 "name": "BaseBdev4", 00:12:05.194 "aliases": [ 00:12:05.194 "d9047e49-fbc4-46d6-bc8d-1e3035d57d6f" 00:12:05.194 ], 00:12:05.194 
"product_name": "Malloc disk", 00:12:05.194 "block_size": 512, 00:12:05.194 "num_blocks": 65536, 00:12:05.194 "uuid": "d9047e49-fbc4-46d6-bc8d-1e3035d57d6f", 00:12:05.194 "assigned_rate_limits": { 00:12:05.194 "rw_ios_per_sec": 0, 00:12:05.194 "rw_mbytes_per_sec": 0, 00:12:05.194 "r_mbytes_per_sec": 0, 00:12:05.194 "w_mbytes_per_sec": 0 00:12:05.194 }, 00:12:05.194 "claimed": true, 00:12:05.194 "claim_type": "exclusive_write", 00:12:05.194 "zoned": false, 00:12:05.194 "supported_io_types": { 00:12:05.194 "read": true, 00:12:05.194 "write": true, 00:12:05.194 "unmap": true, 00:12:05.194 "flush": true, 00:12:05.194 "reset": true, 00:12:05.194 "nvme_admin": false, 00:12:05.194 "nvme_io": false, 00:12:05.194 "nvme_io_md": false, 00:12:05.194 "write_zeroes": true, 00:12:05.194 "zcopy": true, 00:12:05.194 "get_zone_info": false, 00:12:05.194 "zone_management": false, 00:12:05.194 "zone_append": false, 00:12:05.194 "compare": false, 00:12:05.194 "compare_and_write": false, 00:12:05.194 "abort": true, 00:12:05.194 "seek_hole": false, 00:12:05.194 "seek_data": false, 00:12:05.194 "copy": true, 00:12:05.194 "nvme_iov_md": false 00:12:05.194 }, 00:12:05.194 "memory_domains": [ 00:12:05.194 { 00:12:05.194 "dma_device_id": "system", 00:12:05.194 "dma_device_type": 1 00:12:05.194 }, 00:12:05.194 { 00:12:05.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.194 "dma_device_type": 2 00:12:05.194 } 00:12:05.194 ], 00:12:05.194 "driver_specific": {} 00:12:05.194 } 00:12:05.194 ] 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # 
verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.194 11:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.194 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.194 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.194 "name": "Existed_Raid", 00:12:05.194 "uuid": "272c5fd0-ba82-4334-a396-faaa35b583f7", 00:12:05.194 "strip_size_kb": 64, 00:12:05.194 "state": "online", 00:12:05.194 "raid_level": "concat", 00:12:05.194 "superblock": false, 00:12:05.194 
"num_base_bdevs": 4, 00:12:05.194 "num_base_bdevs_discovered": 4, 00:12:05.194 "num_base_bdevs_operational": 4, 00:12:05.194 "base_bdevs_list": [ 00:12:05.194 { 00:12:05.194 "name": "BaseBdev1", 00:12:05.194 "uuid": "cd8cc479-c0ba-4465-a107-9a2824c541c2", 00:12:05.194 "is_configured": true, 00:12:05.194 "data_offset": 0, 00:12:05.194 "data_size": 65536 00:12:05.194 }, 00:12:05.194 { 00:12:05.194 "name": "BaseBdev2", 00:12:05.194 "uuid": "7a2f049c-0f99-42a2-932b-dd96f65115a7", 00:12:05.194 "is_configured": true, 00:12:05.194 "data_offset": 0, 00:12:05.194 "data_size": 65536 00:12:05.194 }, 00:12:05.194 { 00:12:05.194 "name": "BaseBdev3", 00:12:05.194 "uuid": "17021421-f6f1-421a-8078-4cfc86003e0a", 00:12:05.194 "is_configured": true, 00:12:05.194 "data_offset": 0, 00:12:05.194 "data_size": 65536 00:12:05.194 }, 00:12:05.194 { 00:12:05.194 "name": "BaseBdev4", 00:12:05.194 "uuid": "d9047e49-fbc4-46d6-bc8d-1e3035d57d6f", 00:12:05.194 "is_configured": true, 00:12:05.194 "data_offset": 0, 00:12:05.194 "data_size": 65536 00:12:05.194 } 00:12:05.194 ] 00:12:05.194 }' 00:12:05.194 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.194 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.761 11:23:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.761 [2024-11-15 11:23:48.512751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.761 "name": "Existed_Raid", 00:12:05.761 "aliases": [ 00:12:05.761 "272c5fd0-ba82-4334-a396-faaa35b583f7" 00:12:05.761 ], 00:12:05.761 "product_name": "Raid Volume", 00:12:05.761 "block_size": 512, 00:12:05.761 "num_blocks": 262144, 00:12:05.761 "uuid": "272c5fd0-ba82-4334-a396-faaa35b583f7", 00:12:05.761 "assigned_rate_limits": { 00:12:05.761 "rw_ios_per_sec": 0, 00:12:05.761 "rw_mbytes_per_sec": 0, 00:12:05.761 "r_mbytes_per_sec": 0, 00:12:05.761 "w_mbytes_per_sec": 0 00:12:05.761 }, 00:12:05.761 "claimed": false, 00:12:05.761 "zoned": false, 00:12:05.761 "supported_io_types": { 00:12:05.761 "read": true, 00:12:05.761 "write": true, 00:12:05.761 "unmap": true, 00:12:05.761 "flush": true, 00:12:05.761 "reset": true, 00:12:05.761 "nvme_admin": false, 00:12:05.761 "nvme_io": false, 00:12:05.761 "nvme_io_md": false, 00:12:05.761 "write_zeroes": true, 00:12:05.761 "zcopy": false, 00:12:05.761 "get_zone_info": false, 00:12:05.761 "zone_management": false, 00:12:05.761 "zone_append": false, 00:12:05.761 "compare": false, 00:12:05.761 "compare_and_write": false, 00:12:05.761 "abort": false, 00:12:05.761 "seek_hole": false, 00:12:05.761 "seek_data": false, 00:12:05.761 "copy": false, 00:12:05.761 "nvme_iov_md": false 00:12:05.761 }, 
00:12:05.761 "memory_domains": [ 00:12:05.761 { 00:12:05.761 "dma_device_id": "system", 00:12:05.761 "dma_device_type": 1 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.761 "dma_device_type": 2 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "dma_device_id": "system", 00:12:05.761 "dma_device_type": 1 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.761 "dma_device_type": 2 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "dma_device_id": "system", 00:12:05.761 "dma_device_type": 1 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.761 "dma_device_type": 2 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "dma_device_id": "system", 00:12:05.761 "dma_device_type": 1 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.761 "dma_device_type": 2 00:12:05.761 } 00:12:05.761 ], 00:12:05.761 "driver_specific": { 00:12:05.761 "raid": { 00:12:05.761 "uuid": "272c5fd0-ba82-4334-a396-faaa35b583f7", 00:12:05.761 "strip_size_kb": 64, 00:12:05.761 "state": "online", 00:12:05.761 "raid_level": "concat", 00:12:05.761 "superblock": false, 00:12:05.761 "num_base_bdevs": 4, 00:12:05.761 "num_base_bdevs_discovered": 4, 00:12:05.761 "num_base_bdevs_operational": 4, 00:12:05.761 "base_bdevs_list": [ 00:12:05.761 { 00:12:05.761 "name": "BaseBdev1", 00:12:05.761 "uuid": "cd8cc479-c0ba-4465-a107-9a2824c541c2", 00:12:05.761 "is_configured": true, 00:12:05.761 "data_offset": 0, 00:12:05.761 "data_size": 65536 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "name": "BaseBdev2", 00:12:05.761 "uuid": "7a2f049c-0f99-42a2-932b-dd96f65115a7", 00:12:05.761 "is_configured": true, 00:12:05.761 "data_offset": 0, 00:12:05.761 "data_size": 65536 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "name": "BaseBdev3", 00:12:05.761 "uuid": "17021421-f6f1-421a-8078-4cfc86003e0a", 00:12:05.761 "is_configured": true, 00:12:05.761 "data_offset": 0, 
00:12:05.761 "data_size": 65536 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "name": "BaseBdev4", 00:12:05.761 "uuid": "d9047e49-fbc4-46d6-bc8d-1e3035d57d6f", 00:12:05.761 "is_configured": true, 00:12:05.761 "data_offset": 0, 00:12:05.761 "data_size": 65536 00:12:05.761 } 00:12:05.761 ] 00:12:05.761 } 00:12:05.761 } 00:12:05.761 }' 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:05.761 BaseBdev2 00:12:05.761 BaseBdev3 00:12:05.761 BaseBdev4' 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.761 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.762 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.762 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:05.762 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.762 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.762 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.762 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.020 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:06.021 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.021 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.021 [2024-11-15 11:23:48.884502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.021 [2024-11-15 11:23:48.884572] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.021 [2024-11-15 11:23:48.884646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.279 11:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.279 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.279 "name": "Existed_Raid", 00:12:06.279 "uuid": "272c5fd0-ba82-4334-a396-faaa35b583f7", 00:12:06.279 
"strip_size_kb": 64, 00:12:06.279 "state": "offline", 00:12:06.279 "raid_level": "concat", 00:12:06.279 "superblock": false, 00:12:06.279 "num_base_bdevs": 4, 00:12:06.279 "num_base_bdevs_discovered": 3, 00:12:06.279 "num_base_bdevs_operational": 3, 00:12:06.279 "base_bdevs_list": [ 00:12:06.279 { 00:12:06.279 "name": null, 00:12:06.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.279 "is_configured": false, 00:12:06.279 "data_offset": 0, 00:12:06.279 "data_size": 65536 00:12:06.279 }, 00:12:06.279 { 00:12:06.279 "name": "BaseBdev2", 00:12:06.279 "uuid": "7a2f049c-0f99-42a2-932b-dd96f65115a7", 00:12:06.279 "is_configured": true, 00:12:06.279 "data_offset": 0, 00:12:06.279 "data_size": 65536 00:12:06.279 }, 00:12:06.279 { 00:12:06.279 "name": "BaseBdev3", 00:12:06.279 "uuid": "17021421-f6f1-421a-8078-4cfc86003e0a", 00:12:06.279 "is_configured": true, 00:12:06.279 "data_offset": 0, 00:12:06.279 "data_size": 65536 00:12:06.279 }, 00:12:06.279 { 00:12:06.279 "name": "BaseBdev4", 00:12:06.279 "uuid": "d9047e49-fbc4-46d6-bc8d-1e3035d57d6f", 00:12:06.279 "is_configured": true, 00:12:06.279 "data_offset": 0, 00:12:06.279 "data_size": 65536 00:12:06.279 } 00:12:06.279 ] 00:12:06.279 }' 00:12:06.279 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.279 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.538 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:06.538 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.538 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.538 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.538 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.538 11:23:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.796 [2024-11-15 11:23:49.540012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.796 11:23:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.796 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.796 [2024-11-15 11:23:49.681513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.055 [2024-11-15 11:23:49.825934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:07.055 [2024-11-15 
11:23:49.825995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.055 BaseBdev2 00:12:07.055 11:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:07.055 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:07.055 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:07.055 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:07.055 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:07.055 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:07.055 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:07.055 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:07.055 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.055 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.314 [ 00:12:07.314 { 00:12:07.314 "name": "BaseBdev2", 00:12:07.314 "aliases": [ 00:12:07.314 "2d560c13-fcd6-4493-bee8-62374b4153ab" 00:12:07.314 ], 00:12:07.314 "product_name": "Malloc disk", 00:12:07.314 "block_size": 512, 00:12:07.314 "num_blocks": 65536, 00:12:07.314 "uuid": "2d560c13-fcd6-4493-bee8-62374b4153ab", 00:12:07.314 "assigned_rate_limits": { 00:12:07.314 "rw_ios_per_sec": 0, 00:12:07.314 "rw_mbytes_per_sec": 0, 00:12:07.314 "r_mbytes_per_sec": 0, 00:12:07.314 "w_mbytes_per_sec": 0 00:12:07.314 }, 
00:12:07.314 "claimed": false, 00:12:07.314 "zoned": false, 00:12:07.314 "supported_io_types": { 00:12:07.314 "read": true, 00:12:07.314 "write": true, 00:12:07.314 "unmap": true, 00:12:07.314 "flush": true, 00:12:07.314 "reset": true, 00:12:07.314 "nvme_admin": false, 00:12:07.314 "nvme_io": false, 00:12:07.314 "nvme_io_md": false, 00:12:07.314 "write_zeroes": true, 00:12:07.314 "zcopy": true, 00:12:07.314 "get_zone_info": false, 00:12:07.314 "zone_management": false, 00:12:07.314 "zone_append": false, 00:12:07.314 "compare": false, 00:12:07.314 "compare_and_write": false, 00:12:07.314 "abort": true, 00:12:07.314 "seek_hole": false, 00:12:07.314 "seek_data": false, 00:12:07.314 "copy": true, 00:12:07.314 "nvme_iov_md": false 00:12:07.314 }, 00:12:07.314 "memory_domains": [ 00:12:07.314 { 00:12:07.314 "dma_device_id": "system", 00:12:07.314 "dma_device_type": 1 00:12:07.314 }, 00:12:07.314 { 00:12:07.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.314 "dma_device_type": 2 00:12:07.314 } 00:12:07.314 ], 00:12:07.314 "driver_specific": {} 00:12:07.314 } 00:12:07.314 ] 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.314 BaseBdev3 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.314 
11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.314 [ 00:12:07.314 { 00:12:07.314 "name": "BaseBdev3", 00:12:07.314 "aliases": [ 00:12:07.314 "bdb553a2-de14-4b67-b97b-b3e1f4c815fa" 00:12:07.314 ], 00:12:07.314 "product_name": "Malloc disk", 00:12:07.314 "block_size": 512, 00:12:07.314 "num_blocks": 65536, 00:12:07.314 "uuid": "bdb553a2-de14-4b67-b97b-b3e1f4c815fa", 00:12:07.314 "assigned_rate_limits": { 00:12:07.314 "rw_ios_per_sec": 0, 00:12:07.314 "rw_mbytes_per_sec": 0, 00:12:07.314 "r_mbytes_per_sec": 0, 00:12:07.314 "w_mbytes_per_sec": 0 00:12:07.314 }, 00:12:07.314 "claimed": 
false, 00:12:07.314 "zoned": false, 00:12:07.314 "supported_io_types": { 00:12:07.314 "read": true, 00:12:07.314 "write": true, 00:12:07.314 "unmap": true, 00:12:07.314 "flush": true, 00:12:07.314 "reset": true, 00:12:07.314 "nvme_admin": false, 00:12:07.314 "nvme_io": false, 00:12:07.314 "nvme_io_md": false, 00:12:07.314 "write_zeroes": true, 00:12:07.314 "zcopy": true, 00:12:07.314 "get_zone_info": false, 00:12:07.314 "zone_management": false, 00:12:07.314 "zone_append": false, 00:12:07.314 "compare": false, 00:12:07.314 "compare_and_write": false, 00:12:07.314 "abort": true, 00:12:07.314 "seek_hole": false, 00:12:07.314 "seek_data": false, 00:12:07.314 "copy": true, 00:12:07.314 "nvme_iov_md": false 00:12:07.314 }, 00:12:07.314 "memory_domains": [ 00:12:07.314 { 00:12:07.314 "dma_device_id": "system", 00:12:07.314 "dma_device_type": 1 00:12:07.314 }, 00:12:07.314 { 00:12:07.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.314 "dma_device_type": 2 00:12:07.314 } 00:12:07.314 ], 00:12:07.314 "driver_specific": {} 00:12:07.314 } 00:12:07.314 ] 00:12:07.314 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.315 BaseBdev4 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.315 11:23:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.315 [ 00:12:07.315 { 00:12:07.315 "name": "BaseBdev4", 00:12:07.315 "aliases": [ 00:12:07.315 "b59b73ce-7647-4e16-a855-d7e13feae4ee" 00:12:07.315 ], 00:12:07.315 "product_name": "Malloc disk", 00:12:07.315 "block_size": 512, 00:12:07.315 "num_blocks": 65536, 00:12:07.315 "uuid": "b59b73ce-7647-4e16-a855-d7e13feae4ee", 00:12:07.315 "assigned_rate_limits": { 00:12:07.315 "rw_ios_per_sec": 0, 00:12:07.315 "rw_mbytes_per_sec": 0, 00:12:07.315 "r_mbytes_per_sec": 0, 00:12:07.315 "w_mbytes_per_sec": 0 00:12:07.315 }, 00:12:07.315 "claimed": false, 
00:12:07.315 "zoned": false, 00:12:07.315 "supported_io_types": { 00:12:07.315 "read": true, 00:12:07.315 "write": true, 00:12:07.315 "unmap": true, 00:12:07.315 "flush": true, 00:12:07.315 "reset": true, 00:12:07.315 "nvme_admin": false, 00:12:07.315 "nvme_io": false, 00:12:07.315 "nvme_io_md": false, 00:12:07.315 "write_zeroes": true, 00:12:07.315 "zcopy": true, 00:12:07.315 "get_zone_info": false, 00:12:07.315 "zone_management": false, 00:12:07.315 "zone_append": false, 00:12:07.315 "compare": false, 00:12:07.315 "compare_and_write": false, 00:12:07.315 "abort": true, 00:12:07.315 "seek_hole": false, 00:12:07.315 "seek_data": false, 00:12:07.315 "copy": true, 00:12:07.315 "nvme_iov_md": false 00:12:07.315 }, 00:12:07.315 "memory_domains": [ 00:12:07.315 { 00:12:07.315 "dma_device_id": "system", 00:12:07.315 "dma_device_type": 1 00:12:07.315 }, 00:12:07.315 { 00:12:07.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.315 "dma_device_type": 2 00:12:07.315 } 00:12:07.315 ], 00:12:07.315 "driver_specific": {} 00:12:07.315 } 00:12:07.315 ] 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.315 [2024-11-15 11:23:50.190923] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev1 00:12:07.315 [2024-11-15 11:23:50.191180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.315 [2024-11-15 11:23:50.191260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.315 [2024-11-15 11:23:50.193910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.315 [2024-11-15 11:23:50.193978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.315 "name": "Existed_Raid", 00:12:07.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.315 "strip_size_kb": 64, 00:12:07.315 "state": "configuring", 00:12:07.315 "raid_level": "concat", 00:12:07.315 "superblock": false, 00:12:07.315 "num_base_bdevs": 4, 00:12:07.315 "num_base_bdevs_discovered": 3, 00:12:07.315 "num_base_bdevs_operational": 4, 00:12:07.315 "base_bdevs_list": [ 00:12:07.315 { 00:12:07.315 "name": "BaseBdev1", 00:12:07.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.315 "is_configured": false, 00:12:07.315 "data_offset": 0, 00:12:07.315 "data_size": 0 00:12:07.315 }, 00:12:07.315 { 00:12:07.315 "name": "BaseBdev2", 00:12:07.315 "uuid": "2d560c13-fcd6-4493-bee8-62374b4153ab", 00:12:07.315 "is_configured": true, 00:12:07.315 "data_offset": 0, 00:12:07.315 "data_size": 65536 00:12:07.315 }, 00:12:07.315 { 00:12:07.315 "name": "BaseBdev3", 00:12:07.315 "uuid": "bdb553a2-de14-4b67-b97b-b3e1f4c815fa", 00:12:07.315 "is_configured": true, 00:12:07.315 "data_offset": 0, 00:12:07.315 "data_size": 65536 00:12:07.315 }, 00:12:07.315 { 00:12:07.315 "name": "BaseBdev4", 00:12:07.315 "uuid": "b59b73ce-7647-4e16-a855-d7e13feae4ee", 00:12:07.315 "is_configured": true, 00:12:07.315 "data_offset": 0, 00:12:07.315 "data_size": 65536 00:12:07.315 } 00:12:07.315 ] 00:12:07.315 }' 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.315 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.882 [2024-11-15 11:23:50.715169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.882 11:23:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.882 "name": "Existed_Raid", 00:12:07.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.882 "strip_size_kb": 64, 00:12:07.882 "state": "configuring", 00:12:07.882 "raid_level": "concat", 00:12:07.882 "superblock": false, 00:12:07.882 "num_base_bdevs": 4, 00:12:07.882 "num_base_bdevs_discovered": 2, 00:12:07.882 "num_base_bdevs_operational": 4, 00:12:07.882 "base_bdevs_list": [ 00:12:07.882 { 00:12:07.882 "name": "BaseBdev1", 00:12:07.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.882 "is_configured": false, 00:12:07.882 "data_offset": 0, 00:12:07.882 "data_size": 0 00:12:07.882 }, 00:12:07.882 { 00:12:07.882 "name": null, 00:12:07.882 "uuid": "2d560c13-fcd6-4493-bee8-62374b4153ab", 00:12:07.882 "is_configured": false, 00:12:07.882 "data_offset": 0, 00:12:07.882 "data_size": 65536 00:12:07.882 }, 00:12:07.882 { 00:12:07.882 "name": "BaseBdev3", 00:12:07.882 "uuid": "bdb553a2-de14-4b67-b97b-b3e1f4c815fa", 00:12:07.882 "is_configured": true, 00:12:07.882 "data_offset": 0, 00:12:07.882 "data_size": 65536 00:12:07.882 }, 00:12:07.882 { 00:12:07.882 "name": "BaseBdev4", 00:12:07.882 "uuid": "b59b73ce-7647-4e16-a855-d7e13feae4ee", 00:12:07.882 "is_configured": true, 00:12:07.882 "data_offset": 0, 00:12:07.882 "data_size": 65536 00:12:07.882 } 00:12:07.882 ] 00:12:07.882 }' 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.882 11:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.449 [2024-11-15 11:23:51.293122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.449 BaseBdev1 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_wait_for_examine 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.449 [ 00:12:08.449 { 00:12:08.449 "name": "BaseBdev1", 00:12:08.449 "aliases": [ 00:12:08.449 "8deab85b-b815-414f-bce0-64a14420edd6" 00:12:08.449 ], 00:12:08.449 "product_name": "Malloc disk", 00:12:08.449 "block_size": 512, 00:12:08.449 "num_blocks": 65536, 00:12:08.449 "uuid": "8deab85b-b815-414f-bce0-64a14420edd6", 00:12:08.449 "assigned_rate_limits": { 00:12:08.449 "rw_ios_per_sec": 0, 00:12:08.449 "rw_mbytes_per_sec": 0, 00:12:08.449 "r_mbytes_per_sec": 0, 00:12:08.449 "w_mbytes_per_sec": 0 00:12:08.449 }, 00:12:08.449 "claimed": true, 00:12:08.449 "claim_type": "exclusive_write", 00:12:08.449 "zoned": false, 00:12:08.449 "supported_io_types": { 00:12:08.449 "read": true, 00:12:08.449 "write": true, 00:12:08.449 "unmap": true, 00:12:08.449 "flush": true, 00:12:08.449 "reset": true, 00:12:08.449 "nvme_admin": false, 00:12:08.449 "nvme_io": false, 00:12:08.449 "nvme_io_md": false, 00:12:08.449 "write_zeroes": true, 00:12:08.449 "zcopy": true, 00:12:08.449 "get_zone_info": false, 00:12:08.449 "zone_management": false, 00:12:08.449 "zone_append": false, 00:12:08.449 "compare": false, 00:12:08.449 "compare_and_write": false, 00:12:08.449 "abort": true, 00:12:08.449 "seek_hole": false, 00:12:08.449 "seek_data": false, 00:12:08.449 "copy": true, 00:12:08.449 "nvme_iov_md": false 
00:12:08.449 }, 00:12:08.449 "memory_domains": [ 00:12:08.449 { 00:12:08.449 "dma_device_id": "system", 00:12:08.449 "dma_device_type": 1 00:12:08.449 }, 00:12:08.449 { 00:12:08.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.449 "dma_device_type": 2 00:12:08.449 } 00:12:08.449 ], 00:12:08.449 "driver_specific": {} 00:12:08.449 } 00:12:08.449 ] 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.449 11:23:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.449 "name": "Existed_Raid", 00:12:08.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.449 "strip_size_kb": 64, 00:12:08.449 "state": "configuring", 00:12:08.449 "raid_level": "concat", 00:12:08.449 "superblock": false, 00:12:08.449 "num_base_bdevs": 4, 00:12:08.449 "num_base_bdevs_discovered": 3, 00:12:08.449 "num_base_bdevs_operational": 4, 00:12:08.449 "base_bdevs_list": [ 00:12:08.449 { 00:12:08.449 "name": "BaseBdev1", 00:12:08.449 "uuid": "8deab85b-b815-414f-bce0-64a14420edd6", 00:12:08.449 "is_configured": true, 00:12:08.449 "data_offset": 0, 00:12:08.449 "data_size": 65536 00:12:08.449 }, 00:12:08.449 { 00:12:08.449 "name": null, 00:12:08.449 "uuid": "2d560c13-fcd6-4493-bee8-62374b4153ab", 00:12:08.449 "is_configured": false, 00:12:08.449 "data_offset": 0, 00:12:08.449 "data_size": 65536 00:12:08.449 }, 00:12:08.449 { 00:12:08.449 "name": "BaseBdev3", 00:12:08.449 "uuid": "bdb553a2-de14-4b67-b97b-b3e1f4c815fa", 00:12:08.449 "is_configured": true, 00:12:08.449 "data_offset": 0, 00:12:08.449 "data_size": 65536 00:12:08.449 }, 00:12:08.449 { 00:12:08.449 "name": "BaseBdev4", 00:12:08.449 "uuid": "b59b73ce-7647-4e16-a855-d7e13feae4ee", 00:12:08.449 "is_configured": true, 00:12:08.449 "data_offset": 0, 00:12:08.449 "data_size": 65536 00:12:08.449 } 00:12:08.449 ] 00:12:08.449 }' 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.449 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.016 11:23:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.016 [2024-11-15 11:23:51.877443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.016 "name": "Existed_Raid", 00:12:09.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.016 "strip_size_kb": 64, 00:12:09.016 "state": "configuring", 00:12:09.016 "raid_level": "concat", 00:12:09.016 "superblock": false, 00:12:09.016 "num_base_bdevs": 4, 00:12:09.016 "num_base_bdevs_discovered": 2, 00:12:09.016 "num_base_bdevs_operational": 4, 00:12:09.016 "base_bdevs_list": [ 00:12:09.016 { 00:12:09.016 "name": "BaseBdev1", 00:12:09.016 "uuid": "8deab85b-b815-414f-bce0-64a14420edd6", 00:12:09.016 "is_configured": true, 00:12:09.016 "data_offset": 0, 00:12:09.016 "data_size": 65536 00:12:09.016 }, 00:12:09.016 { 00:12:09.016 "name": null, 00:12:09.016 "uuid": "2d560c13-fcd6-4493-bee8-62374b4153ab", 00:12:09.016 "is_configured": false, 00:12:09.016 "data_offset": 0, 00:12:09.016 "data_size": 65536 00:12:09.016 }, 00:12:09.016 { 00:12:09.016 "name": null, 00:12:09.016 "uuid": "bdb553a2-de14-4b67-b97b-b3e1f4c815fa", 
00:12:09.016 "is_configured": false, 00:12:09.016 "data_offset": 0, 00:12:09.016 "data_size": 65536 00:12:09.016 }, 00:12:09.016 { 00:12:09.016 "name": "BaseBdev4", 00:12:09.016 "uuid": "b59b73ce-7647-4e16-a855-d7e13feae4ee", 00:12:09.016 "is_configured": true, 00:12:09.016 "data_offset": 0, 00:12:09.016 "data_size": 65536 00:12:09.016 } 00:12:09.016 ] 00:12:09.016 }' 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.016 11:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.584 [2024-11-15 11:23:52.429680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.584 "name": "Existed_Raid", 00:12:09.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.584 "strip_size_kb": 64, 00:12:09.584 "state": "configuring", 00:12:09.584 "raid_level": "concat", 00:12:09.584 "superblock": false, 00:12:09.584 "num_base_bdevs": 4, 00:12:09.584 
"num_base_bdevs_discovered": 3, 00:12:09.584 "num_base_bdevs_operational": 4, 00:12:09.584 "base_bdevs_list": [ 00:12:09.584 { 00:12:09.584 "name": "BaseBdev1", 00:12:09.584 "uuid": "8deab85b-b815-414f-bce0-64a14420edd6", 00:12:09.584 "is_configured": true, 00:12:09.584 "data_offset": 0, 00:12:09.584 "data_size": 65536 00:12:09.584 }, 00:12:09.584 { 00:12:09.584 "name": null, 00:12:09.584 "uuid": "2d560c13-fcd6-4493-bee8-62374b4153ab", 00:12:09.584 "is_configured": false, 00:12:09.584 "data_offset": 0, 00:12:09.584 "data_size": 65536 00:12:09.584 }, 00:12:09.584 { 00:12:09.584 "name": "BaseBdev3", 00:12:09.584 "uuid": "bdb553a2-de14-4b67-b97b-b3e1f4c815fa", 00:12:09.584 "is_configured": true, 00:12:09.584 "data_offset": 0, 00:12:09.584 "data_size": 65536 00:12:09.584 }, 00:12:09.584 { 00:12:09.584 "name": "BaseBdev4", 00:12:09.584 "uuid": "b59b73ce-7647-4e16-a855-d7e13feae4ee", 00:12:09.584 "is_configured": true, 00:12:09.584 "data_offset": 0, 00:12:09.584 "data_size": 65536 00:12:09.584 } 00:12:09.584 ] 00:12:09.584 }' 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.584 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.150 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.150 11:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:10.150 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.150 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.150 11:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.150 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 
-- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.151 [2024-11-15 11:23:53.009872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:12:10.151 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.409 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.409 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.409 "name": "Existed_Raid", 00:12:10.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.409 "strip_size_kb": 64, 00:12:10.409 "state": "configuring", 00:12:10.409 "raid_level": "concat", 00:12:10.409 "superblock": false, 00:12:10.409 "num_base_bdevs": 4, 00:12:10.409 "num_base_bdevs_discovered": 2, 00:12:10.409 "num_base_bdevs_operational": 4, 00:12:10.409 "base_bdevs_list": [ 00:12:10.409 { 00:12:10.409 "name": null, 00:12:10.409 "uuid": "8deab85b-b815-414f-bce0-64a14420edd6", 00:12:10.409 "is_configured": false, 00:12:10.409 "data_offset": 0, 00:12:10.409 "data_size": 65536 00:12:10.409 }, 00:12:10.409 { 00:12:10.409 "name": null, 00:12:10.409 "uuid": "2d560c13-fcd6-4493-bee8-62374b4153ab", 00:12:10.409 "is_configured": false, 00:12:10.409 "data_offset": 0, 00:12:10.409 "data_size": 65536 00:12:10.409 }, 00:12:10.409 { 00:12:10.409 "name": "BaseBdev3", 00:12:10.409 "uuid": "bdb553a2-de14-4b67-b97b-b3e1f4c815fa", 00:12:10.409 "is_configured": true, 00:12:10.409 "data_offset": 0, 00:12:10.409 "data_size": 65536 00:12:10.409 }, 00:12:10.409 { 00:12:10.409 "name": "BaseBdev4", 00:12:10.409 "uuid": "b59b73ce-7647-4e16-a855-d7e13feae4ee", 00:12:10.409 "is_configured": true, 00:12:10.409 "data_offset": 0, 00:12:10.409 "data_size": 65536 00:12:10.409 } 00:12:10.409 ] 00:12:10.409 }' 00:12:10.409 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.409 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.666 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:10.666 11:23:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.666 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.666 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.924 [2024-11-15 11:23:53.653385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.924 11:23:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.924 "name": "Existed_Raid", 00:12:10.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.924 "strip_size_kb": 64, 00:12:10.924 "state": "configuring", 00:12:10.924 "raid_level": "concat", 00:12:10.924 "superblock": false, 00:12:10.924 "num_base_bdevs": 4, 00:12:10.924 "num_base_bdevs_discovered": 3, 00:12:10.924 "num_base_bdevs_operational": 4, 00:12:10.924 "base_bdevs_list": [ 00:12:10.924 { 00:12:10.924 "name": null, 00:12:10.924 "uuid": "8deab85b-b815-414f-bce0-64a14420edd6", 00:12:10.924 "is_configured": false, 00:12:10.924 "data_offset": 0, 00:12:10.924 "data_size": 65536 00:12:10.924 }, 00:12:10.924 { 00:12:10.924 "name": "BaseBdev2", 00:12:10.924 "uuid": "2d560c13-fcd6-4493-bee8-62374b4153ab", 00:12:10.924 "is_configured": true, 00:12:10.924 "data_offset": 0, 00:12:10.924 "data_size": 65536 00:12:10.924 }, 00:12:10.924 { 00:12:10.924 "name": "BaseBdev3", 00:12:10.924 "uuid": "bdb553a2-de14-4b67-b97b-b3e1f4c815fa", 00:12:10.924 "is_configured": true, 00:12:10.924 "data_offset": 0, 
00:12:10.924 "data_size": 65536 00:12:10.924 }, 00:12:10.924 { 00:12:10.924 "name": "BaseBdev4", 00:12:10.924 "uuid": "b59b73ce-7647-4e16-a855-d7e13feae4ee", 00:12:10.924 "is_configured": true, 00:12:10.924 "data_offset": 0, 00:12:10.924 "data_size": 65536 00:12:10.924 } 00:12:10.924 ] 00:12:10.924 }' 00:12:10.924 11:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.925 11:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8deab85b-b815-414f-bce0-64a14420edd6 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.491 [2024-11-15 11:23:54.309798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:11.491 [2024-11-15 11:23:54.309860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:11.491 [2024-11-15 11:23:54.309871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:11.491 [2024-11-15 11:23:54.310312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:11.491 [2024-11-15 11:23:54.310571] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:11.491 [2024-11-15 11:23:54.310608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:11.491 [2024-11-15 11:23:54.310959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.491 NewBaseBdev 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:11.491 
11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.491 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.491 [ 00:12:11.491 { 00:12:11.491 "name": "NewBaseBdev", 00:12:11.492 "aliases": [ 00:12:11.492 "8deab85b-b815-414f-bce0-64a14420edd6" 00:12:11.492 ], 00:12:11.492 "product_name": "Malloc disk", 00:12:11.492 "block_size": 512, 00:12:11.492 "num_blocks": 65536, 00:12:11.492 "uuid": "8deab85b-b815-414f-bce0-64a14420edd6", 00:12:11.492 "assigned_rate_limits": { 00:12:11.492 "rw_ios_per_sec": 0, 00:12:11.492 "rw_mbytes_per_sec": 0, 00:12:11.492 "r_mbytes_per_sec": 0, 00:12:11.492 "w_mbytes_per_sec": 0 00:12:11.492 }, 00:12:11.492 "claimed": true, 00:12:11.492 "claim_type": "exclusive_write", 00:12:11.492 "zoned": false, 00:12:11.492 "supported_io_types": { 00:12:11.492 "read": true, 00:12:11.492 "write": true, 00:12:11.492 "unmap": true, 00:12:11.492 "flush": true, 00:12:11.492 "reset": true, 00:12:11.492 "nvme_admin": false, 00:12:11.492 "nvme_io": false, 00:12:11.492 "nvme_io_md": false, 00:12:11.492 "write_zeroes": true, 00:12:11.492 "zcopy": true, 00:12:11.492 "get_zone_info": false, 00:12:11.492 "zone_management": false, 00:12:11.492 "zone_append": false, 00:12:11.492 "compare": false, 00:12:11.492 "compare_and_write": false, 00:12:11.492 "abort": true, 00:12:11.492 "seek_hole": false, 00:12:11.492 "seek_data": false, 00:12:11.492 "copy": true, 00:12:11.492 "nvme_iov_md": false 00:12:11.492 }, 00:12:11.492 
"memory_domains": [ 00:12:11.492 { 00:12:11.492 "dma_device_id": "system", 00:12:11.492 "dma_device_type": 1 00:12:11.492 }, 00:12:11.492 { 00:12:11.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.492 "dma_device_type": 2 00:12:11.492 } 00:12:11.492 ], 00:12:11.492 "driver_specific": {} 00:12:11.492 } 00:12:11.492 ] 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.492 "name": "Existed_Raid", 00:12:11.492 "uuid": "16dca8ff-ecac-4406-ad5f-d20a80fc9c3a", 00:12:11.492 "strip_size_kb": 64, 00:12:11.492 "state": "online", 00:12:11.492 "raid_level": "concat", 00:12:11.492 "superblock": false, 00:12:11.492 "num_base_bdevs": 4, 00:12:11.492 "num_base_bdevs_discovered": 4, 00:12:11.492 "num_base_bdevs_operational": 4, 00:12:11.492 "base_bdevs_list": [ 00:12:11.492 { 00:12:11.492 "name": "NewBaseBdev", 00:12:11.492 "uuid": "8deab85b-b815-414f-bce0-64a14420edd6", 00:12:11.492 "is_configured": true, 00:12:11.492 "data_offset": 0, 00:12:11.492 "data_size": 65536 00:12:11.492 }, 00:12:11.492 { 00:12:11.492 "name": "BaseBdev2", 00:12:11.492 "uuid": "2d560c13-fcd6-4493-bee8-62374b4153ab", 00:12:11.492 "is_configured": true, 00:12:11.492 "data_offset": 0, 00:12:11.492 "data_size": 65536 00:12:11.492 }, 00:12:11.492 { 00:12:11.492 "name": "BaseBdev3", 00:12:11.492 "uuid": "bdb553a2-de14-4b67-b97b-b3e1f4c815fa", 00:12:11.492 "is_configured": true, 00:12:11.492 "data_offset": 0, 00:12:11.492 "data_size": 65536 00:12:11.492 }, 00:12:11.492 { 00:12:11.492 "name": "BaseBdev4", 00:12:11.492 "uuid": "b59b73ce-7647-4e16-a855-d7e13feae4ee", 00:12:11.492 "is_configured": true, 00:12:11.492 "data_offset": 0, 00:12:11.492 "data_size": 65536 00:12:11.492 } 00:12:11.492 ] 00:12:11.492 }' 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.492 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.058 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:12:12.058 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:12.058 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.058 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.058 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.058 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.058 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.058 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:12.058 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.058 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.058 [2024-11-15 11:23:54.870600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.058 11:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.058 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.058 "name": "Existed_Raid", 00:12:12.058 "aliases": [ 00:12:12.058 "16dca8ff-ecac-4406-ad5f-d20a80fc9c3a" 00:12:12.058 ], 00:12:12.058 "product_name": "Raid Volume", 00:12:12.058 "block_size": 512, 00:12:12.058 "num_blocks": 262144, 00:12:12.058 "uuid": "16dca8ff-ecac-4406-ad5f-d20a80fc9c3a", 00:12:12.058 "assigned_rate_limits": { 00:12:12.058 "rw_ios_per_sec": 0, 00:12:12.058 "rw_mbytes_per_sec": 0, 00:12:12.058 "r_mbytes_per_sec": 0, 00:12:12.058 "w_mbytes_per_sec": 0 00:12:12.058 }, 00:12:12.058 "claimed": false, 00:12:12.058 "zoned": false, 00:12:12.058 "supported_io_types": { 00:12:12.058 "read": true, 
00:12:12.059 "write": true, 00:12:12.059 "unmap": true, 00:12:12.059 "flush": true, 00:12:12.059 "reset": true, 00:12:12.059 "nvme_admin": false, 00:12:12.059 "nvme_io": false, 00:12:12.059 "nvme_io_md": false, 00:12:12.059 "write_zeroes": true, 00:12:12.059 "zcopy": false, 00:12:12.059 "get_zone_info": false, 00:12:12.059 "zone_management": false, 00:12:12.059 "zone_append": false, 00:12:12.059 "compare": false, 00:12:12.059 "compare_and_write": false, 00:12:12.059 "abort": false, 00:12:12.059 "seek_hole": false, 00:12:12.059 "seek_data": false, 00:12:12.059 "copy": false, 00:12:12.059 "nvme_iov_md": false 00:12:12.059 }, 00:12:12.059 "memory_domains": [ 00:12:12.059 { 00:12:12.059 "dma_device_id": "system", 00:12:12.059 "dma_device_type": 1 00:12:12.059 }, 00:12:12.059 { 00:12:12.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.059 "dma_device_type": 2 00:12:12.059 }, 00:12:12.059 { 00:12:12.059 "dma_device_id": "system", 00:12:12.059 "dma_device_type": 1 00:12:12.059 }, 00:12:12.059 { 00:12:12.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.059 "dma_device_type": 2 00:12:12.059 }, 00:12:12.059 { 00:12:12.059 "dma_device_id": "system", 00:12:12.059 "dma_device_type": 1 00:12:12.059 }, 00:12:12.059 { 00:12:12.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.059 "dma_device_type": 2 00:12:12.059 }, 00:12:12.059 { 00:12:12.059 "dma_device_id": "system", 00:12:12.059 "dma_device_type": 1 00:12:12.059 }, 00:12:12.059 { 00:12:12.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.059 "dma_device_type": 2 00:12:12.059 } 00:12:12.059 ], 00:12:12.059 "driver_specific": { 00:12:12.059 "raid": { 00:12:12.059 "uuid": "16dca8ff-ecac-4406-ad5f-d20a80fc9c3a", 00:12:12.059 "strip_size_kb": 64, 00:12:12.059 "state": "online", 00:12:12.059 "raid_level": "concat", 00:12:12.059 "superblock": false, 00:12:12.059 "num_base_bdevs": 4, 00:12:12.059 "num_base_bdevs_discovered": 4, 00:12:12.059 "num_base_bdevs_operational": 4, 00:12:12.059 "base_bdevs_list": [ 
00:12:12.059 { 00:12:12.059 "name": "NewBaseBdev", 00:12:12.059 "uuid": "8deab85b-b815-414f-bce0-64a14420edd6", 00:12:12.059 "is_configured": true, 00:12:12.059 "data_offset": 0, 00:12:12.059 "data_size": 65536 00:12:12.059 }, 00:12:12.059 { 00:12:12.059 "name": "BaseBdev2", 00:12:12.059 "uuid": "2d560c13-fcd6-4493-bee8-62374b4153ab", 00:12:12.059 "is_configured": true, 00:12:12.059 "data_offset": 0, 00:12:12.059 "data_size": 65536 00:12:12.059 }, 00:12:12.059 { 00:12:12.059 "name": "BaseBdev3", 00:12:12.059 "uuid": "bdb553a2-de14-4b67-b97b-b3e1f4c815fa", 00:12:12.059 "is_configured": true, 00:12:12.059 "data_offset": 0, 00:12:12.059 "data_size": 65536 00:12:12.059 }, 00:12:12.059 { 00:12:12.059 "name": "BaseBdev4", 00:12:12.059 "uuid": "b59b73ce-7647-4e16-a855-d7e13feae4ee", 00:12:12.059 "is_configured": true, 00:12:12.059 "data_offset": 0, 00:12:12.059 "data_size": 65536 00:12:12.059 } 00:12:12.059 ] 00:12:12.059 } 00:12:12.059 } 00:12:12.059 }' 00:12:12.059 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.059 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:12.059 BaseBdev2 00:12:12.059 BaseBdev3 00:12:12.059 BaseBdev4' 00:12:12.059 11:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.317 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.318 [2024-11-15 11:23:55.250165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.318 [2024-11-15 11:23:55.250273] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.318 [2024-11-15 
11:23:55.250376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.318 [2024-11-15 11:23:55.250503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.318 [2024-11-15 11:23:55.250547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71269 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71269 ']' 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71269 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:12.318 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71269 00:12:12.576 killing process with pid 71269 00:12:12.576 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:12.576 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:12.576 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71269' 00:12:12.576 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71269 00:12:12.576 [2024-11-15 11:23:55.292763] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.576 11:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71269 00:12:12.835 [2024-11-15 11:23:55.640654] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:12:13.791 ************************************ 00:12:13.791 END TEST raid_state_function_test 00:12:13.791 ************************************ 00:12:13.791 11:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:13.791 00:12:13.791 real 0m12.691s 00:12:13.791 user 0m20.926s 00:12:13.791 sys 0m1.853s 00:12:13.791 11:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:13.791 11:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.085 11:23:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:14.085 11:23:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:14.085 11:23:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:14.085 11:23:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.085 ************************************ 00:12:14.085 START TEST raid_state_function_test_sb 00:12:14.085 ************************************ 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 
-- # '[' concat '!=' raid1 ']' 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71952 00:12:14.085 Process raid pid: 71952 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71952' 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71952 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 71952 ']' 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:14.085 11:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.085 [2024-11-15 11:23:56.880670] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:12:14.085 [2024-11-15 11:23:56.880839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.344 [2024-11-15 11:23:57.060217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.344 [2024-11-15 11:23:57.207067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.601 [2024-11-15 11:23:57.433153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.601 [2024-11-15 11:23:57.433217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.168 [2024-11-15 11:23:57.925399] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.168 [2024-11-15 11:23:57.925473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.168 [2024-11-15 
11:23:57.925492] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.168 [2024-11-15 11:23:57.925510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.168 [2024-11-15 11:23:57.925520] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:15.168 [2024-11-15 11:23:57.925535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:15.168 [2024-11-15 11:23:57.925545] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:15.168 [2024-11-15 11:23:57.925560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.168 11:23:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.168 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.168 "name": "Existed_Raid", 00:12:15.168 "uuid": "a59f033f-128f-4b87-9cc1-4c9959d6eba4", 00:12:15.168 "strip_size_kb": 64, 00:12:15.168 "state": "configuring", 00:12:15.168 "raid_level": "concat", 00:12:15.168 "superblock": true, 00:12:15.168 "num_base_bdevs": 4, 00:12:15.168 "num_base_bdevs_discovered": 0, 00:12:15.169 "num_base_bdevs_operational": 4, 00:12:15.169 "base_bdevs_list": [ 00:12:15.169 { 00:12:15.169 "name": "BaseBdev1", 00:12:15.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.169 "is_configured": false, 00:12:15.169 "data_offset": 0, 00:12:15.169 "data_size": 0 00:12:15.169 }, 00:12:15.169 { 00:12:15.169 "name": "BaseBdev2", 00:12:15.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.169 "is_configured": false, 00:12:15.169 "data_offset": 0, 00:12:15.169 "data_size": 0 00:12:15.169 }, 00:12:15.169 { 00:12:15.169 "name": "BaseBdev3", 00:12:15.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.169 "is_configured": false, 00:12:15.169 "data_offset": 0, 00:12:15.169 "data_size": 0 00:12:15.169 }, 00:12:15.169 { 00:12:15.169 "name": "BaseBdev4", 00:12:15.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.169 "is_configured": false, 00:12:15.169 
"data_offset": 0, 00:12:15.169 "data_size": 0 00:12:15.169 } 00:12:15.169 ] 00:12:15.169 }' 00:12:15.169 11:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.169 11:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.734 [2024-11-15 11:23:58.441468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:15.734 [2024-11-15 11:23:58.441746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.734 [2024-11-15 11:23:58.453454] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.734 [2024-11-15 11:23:58.453721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.734 [2024-11-15 11:23:58.453890] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.734 [2024-11-15 11:23:58.453960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.734 
[2024-11-15 11:23:58.454122] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:15.734 [2024-11-15 11:23:58.454186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:15.734 [2024-11-15 11:23:58.454202] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:15.734 [2024-11-15 11:23:58.454220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.734 [2024-11-15 11:23:58.505787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.734 BaseBdev1 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.734 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.734 [ 00:12:15.734 { 00:12:15.734 "name": "BaseBdev1", 00:12:15.734 "aliases": [ 00:12:15.734 "5a2b0ee1-8b4e-4c46-a0f3-7bc2758bae7f" 00:12:15.734 ], 00:12:15.734 "product_name": "Malloc disk", 00:12:15.734 "block_size": 512, 00:12:15.734 "num_blocks": 65536, 00:12:15.734 "uuid": "5a2b0ee1-8b4e-4c46-a0f3-7bc2758bae7f", 00:12:15.734 "assigned_rate_limits": { 00:12:15.734 "rw_ios_per_sec": 0, 00:12:15.734 "rw_mbytes_per_sec": 0, 00:12:15.734 "r_mbytes_per_sec": 0, 00:12:15.734 "w_mbytes_per_sec": 0 00:12:15.734 }, 00:12:15.734 "claimed": true, 00:12:15.734 "claim_type": "exclusive_write", 00:12:15.734 "zoned": false, 00:12:15.734 "supported_io_types": { 00:12:15.734 "read": true, 00:12:15.734 "write": true, 00:12:15.734 "unmap": true, 00:12:15.734 "flush": true, 00:12:15.734 "reset": true, 00:12:15.734 "nvme_admin": false, 00:12:15.734 "nvme_io": false, 00:12:15.734 "nvme_io_md": false, 00:12:15.734 "write_zeroes": true, 00:12:15.734 "zcopy": true, 00:12:15.734 "get_zone_info": false, 00:12:15.734 "zone_management": false, 00:12:15.734 "zone_append": false, 00:12:15.735 "compare": false, 00:12:15.735 "compare_and_write": false, 00:12:15.735 "abort": true, 00:12:15.735 "seek_hole": false, 00:12:15.735 "seek_data": false, 
00:12:15.735 "copy": true, 00:12:15.735 "nvme_iov_md": false 00:12:15.735 }, 00:12:15.735 "memory_domains": [ 00:12:15.735 { 00:12:15.735 "dma_device_id": "system", 00:12:15.735 "dma_device_type": 1 00:12:15.735 }, 00:12:15.735 { 00:12:15.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.735 "dma_device_type": 2 00:12:15.735 } 00:12:15.735 ], 00:12:15.735 "driver_specific": {} 00:12:15.735 } 00:12:15.735 ] 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.735 11:23:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.735 "name": "Existed_Raid", 00:12:15.735 "uuid": "fcf886be-15e3-438d-ac78-5031f5439a06", 00:12:15.735 "strip_size_kb": 64, 00:12:15.735 "state": "configuring", 00:12:15.735 "raid_level": "concat", 00:12:15.735 "superblock": true, 00:12:15.735 "num_base_bdevs": 4, 00:12:15.735 "num_base_bdevs_discovered": 1, 00:12:15.735 "num_base_bdevs_operational": 4, 00:12:15.735 "base_bdevs_list": [ 00:12:15.735 { 00:12:15.735 "name": "BaseBdev1", 00:12:15.735 "uuid": "5a2b0ee1-8b4e-4c46-a0f3-7bc2758bae7f", 00:12:15.735 "is_configured": true, 00:12:15.735 "data_offset": 2048, 00:12:15.735 "data_size": 63488 00:12:15.735 }, 00:12:15.735 { 00:12:15.735 "name": "BaseBdev2", 00:12:15.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.735 "is_configured": false, 00:12:15.735 "data_offset": 0, 00:12:15.735 "data_size": 0 00:12:15.735 }, 00:12:15.735 { 00:12:15.735 "name": "BaseBdev3", 00:12:15.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.735 "is_configured": false, 00:12:15.735 "data_offset": 0, 00:12:15.735 "data_size": 0 00:12:15.735 }, 00:12:15.735 { 00:12:15.735 "name": "BaseBdev4", 00:12:15.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.735 "is_configured": false, 00:12:15.735 "data_offset": 0, 00:12:15.735 "data_size": 0 00:12:15.735 } 00:12:15.735 ] 00:12:15.735 }' 00:12:15.735 11:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.735 11:23:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.300 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:16.300 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.300 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.301 [2024-11-15 11:23:59.069952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:16.301 [2024-11-15 11:23:59.070009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.301 [2024-11-15 11:23:59.078062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.301 [2024-11-15 11:23:59.081029] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.301 [2024-11-15 11:23:59.081263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.301 [2024-11-15 11:23:59.081394] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:16.301 [2024-11-15 11:23:59.081535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:16.301 [2024-11-15 11:23:59.081672] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev4 00:12:16.301 [2024-11-15 11:23:59.081818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.301 "name": "Existed_Raid", 00:12:16.301 "uuid": "e69cd1ca-abbd-4a77-a8d3-e73160729dbd", 00:12:16.301 "strip_size_kb": 64, 00:12:16.301 "state": "configuring", 00:12:16.301 "raid_level": "concat", 00:12:16.301 "superblock": true, 00:12:16.301 "num_base_bdevs": 4, 00:12:16.301 "num_base_bdevs_discovered": 1, 00:12:16.301 "num_base_bdevs_operational": 4, 00:12:16.301 "base_bdevs_list": [ 00:12:16.301 { 00:12:16.301 "name": "BaseBdev1", 00:12:16.301 "uuid": "5a2b0ee1-8b4e-4c46-a0f3-7bc2758bae7f", 00:12:16.301 "is_configured": true, 00:12:16.301 "data_offset": 2048, 00:12:16.301 "data_size": 63488 00:12:16.301 }, 00:12:16.301 { 00:12:16.301 "name": "BaseBdev2", 00:12:16.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.301 "is_configured": false, 00:12:16.301 "data_offset": 0, 00:12:16.301 "data_size": 0 00:12:16.301 }, 00:12:16.301 { 00:12:16.301 "name": "BaseBdev3", 00:12:16.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.301 "is_configured": false, 00:12:16.301 "data_offset": 0, 00:12:16.301 "data_size": 0 00:12:16.301 }, 00:12:16.301 { 00:12:16.301 "name": "BaseBdev4", 00:12:16.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.301 "is_configured": false, 00:12:16.301 "data_offset": 0, 00:12:16.301 "data_size": 0 00:12:16.301 } 00:12:16.301 ] 00:12:16.301 }' 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.301 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.866 [2024-11-15 11:23:59.639333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.866 BaseBdev2 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.866 [ 00:12:16.866 { 00:12:16.866 "name": "BaseBdev2", 00:12:16.866 "aliases": [ 00:12:16.866 "4411f2f4-b098-4013-906f-a15b28921e3b" 00:12:16.866 ], 00:12:16.866 "product_name": "Malloc disk", 00:12:16.866 "block_size": 512, 00:12:16.866 "num_blocks": 65536, 00:12:16.866 "uuid": "4411f2f4-b098-4013-906f-a15b28921e3b", 00:12:16.866 "assigned_rate_limits": { 00:12:16.866 "rw_ios_per_sec": 0, 00:12:16.866 "rw_mbytes_per_sec": 0, 00:12:16.866 "r_mbytes_per_sec": 0, 00:12:16.866 "w_mbytes_per_sec": 0 00:12:16.866 }, 00:12:16.866 "claimed": true, 00:12:16.866 "claim_type": "exclusive_write", 00:12:16.866 "zoned": false, 00:12:16.866 "supported_io_types": { 00:12:16.866 "read": true, 00:12:16.866 "write": true, 00:12:16.866 "unmap": true, 00:12:16.866 "flush": true, 00:12:16.866 "reset": true, 00:12:16.866 "nvme_admin": false, 00:12:16.866 "nvme_io": false, 00:12:16.866 "nvme_io_md": false, 00:12:16.866 "write_zeroes": true, 00:12:16.866 "zcopy": true, 00:12:16.866 "get_zone_info": false, 00:12:16.866 "zone_management": false, 00:12:16.866 "zone_append": false, 00:12:16.866 "compare": false, 00:12:16.866 "compare_and_write": false, 00:12:16.866 "abort": true, 00:12:16.866 "seek_hole": false, 00:12:16.866 "seek_data": false, 00:12:16.866 "copy": true, 00:12:16.866 "nvme_iov_md": false 00:12:16.866 }, 00:12:16.866 "memory_domains": [ 00:12:16.866 { 00:12:16.866 "dma_device_id": "system", 00:12:16.866 "dma_device_type": 1 00:12:16.866 }, 00:12:16.866 { 00:12:16.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.866 "dma_device_type": 2 00:12:16.866 } 00:12:16.866 ], 00:12:16.866 "driver_specific": {} 00:12:16.866 } 00:12:16.866 ] 00:12:16.866 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.867 "name": "Existed_Raid", 00:12:16.867 "uuid": "e69cd1ca-abbd-4a77-a8d3-e73160729dbd", 00:12:16.867 "strip_size_kb": 64, 00:12:16.867 "state": "configuring", 00:12:16.867 "raid_level": "concat", 00:12:16.867 "superblock": true, 00:12:16.867 "num_base_bdevs": 4, 00:12:16.867 "num_base_bdevs_discovered": 2, 00:12:16.867 "num_base_bdevs_operational": 4, 00:12:16.867 "base_bdevs_list": [ 00:12:16.867 { 00:12:16.867 "name": "BaseBdev1", 00:12:16.867 "uuid": "5a2b0ee1-8b4e-4c46-a0f3-7bc2758bae7f", 00:12:16.867 "is_configured": true, 00:12:16.867 "data_offset": 2048, 00:12:16.867 "data_size": 63488 00:12:16.867 }, 00:12:16.867 { 00:12:16.867 "name": "BaseBdev2", 00:12:16.867 "uuid": "4411f2f4-b098-4013-906f-a15b28921e3b", 00:12:16.867 "is_configured": true, 00:12:16.867 "data_offset": 2048, 00:12:16.867 "data_size": 63488 00:12:16.867 }, 00:12:16.867 { 00:12:16.867 "name": "BaseBdev3", 00:12:16.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.867 "is_configured": false, 00:12:16.867 "data_offset": 0, 00:12:16.867 "data_size": 0 00:12:16.867 }, 00:12:16.867 { 00:12:16.867 "name": "BaseBdev4", 00:12:16.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.867 "is_configured": false, 00:12:16.867 "data_offset": 0, 00:12:16.867 "data_size": 0 00:12:16.867 } 00:12:16.867 ] 00:12:16.867 }' 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.867 11:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.432 [2024-11-15 11:24:00.253575] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.432 BaseBdev3 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.432 [ 00:12:17.432 { 00:12:17.432 "name": "BaseBdev3", 00:12:17.432 "aliases": [ 00:12:17.432 "9be05ff1-8647-4dba-aa7a-580582285550" 00:12:17.432 ], 00:12:17.432 "product_name": "Malloc disk", 00:12:17.432 "block_size": 512, 00:12:17.432 "num_blocks": 65536, 00:12:17.432 
"uuid": "9be05ff1-8647-4dba-aa7a-580582285550", 00:12:17.432 "assigned_rate_limits": { 00:12:17.432 "rw_ios_per_sec": 0, 00:12:17.432 "rw_mbytes_per_sec": 0, 00:12:17.432 "r_mbytes_per_sec": 0, 00:12:17.432 "w_mbytes_per_sec": 0 00:12:17.432 }, 00:12:17.432 "claimed": true, 00:12:17.432 "claim_type": "exclusive_write", 00:12:17.432 "zoned": false, 00:12:17.432 "supported_io_types": { 00:12:17.432 "read": true, 00:12:17.432 "write": true, 00:12:17.432 "unmap": true, 00:12:17.432 "flush": true, 00:12:17.432 "reset": true, 00:12:17.432 "nvme_admin": false, 00:12:17.432 "nvme_io": false, 00:12:17.432 "nvme_io_md": false, 00:12:17.432 "write_zeroes": true, 00:12:17.432 "zcopy": true, 00:12:17.432 "get_zone_info": false, 00:12:17.432 "zone_management": false, 00:12:17.432 "zone_append": false, 00:12:17.432 "compare": false, 00:12:17.432 "compare_and_write": false, 00:12:17.432 "abort": true, 00:12:17.432 "seek_hole": false, 00:12:17.432 "seek_data": false, 00:12:17.432 "copy": true, 00:12:17.432 "nvme_iov_md": false 00:12:17.432 }, 00:12:17.432 "memory_domains": [ 00:12:17.432 { 00:12:17.432 "dma_device_id": "system", 00:12:17.432 "dma_device_type": 1 00:12:17.432 }, 00:12:17.432 { 00:12:17.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.432 "dma_device_type": 2 00:12:17.432 } 00:12:17.432 ], 00:12:17.432 "driver_specific": {} 00:12:17.432 } 00:12:17.432 ] 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:17.432 11:24:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.432 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.433 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.433 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.433 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.433 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.433 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.433 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.433 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.433 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.433 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.433 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.433 "name": "Existed_Raid", 00:12:17.433 "uuid": "e69cd1ca-abbd-4a77-a8d3-e73160729dbd", 00:12:17.433 "strip_size_kb": 64, 00:12:17.433 "state": "configuring", 00:12:17.433 "raid_level": "concat", 00:12:17.433 "superblock": true, 00:12:17.433 "num_base_bdevs": 4, 00:12:17.433 
"num_base_bdevs_discovered": 3, 00:12:17.433 "num_base_bdevs_operational": 4, 00:12:17.433 "base_bdevs_list": [ 00:12:17.433 { 00:12:17.433 "name": "BaseBdev1", 00:12:17.433 "uuid": "5a2b0ee1-8b4e-4c46-a0f3-7bc2758bae7f", 00:12:17.433 "is_configured": true, 00:12:17.433 "data_offset": 2048, 00:12:17.433 "data_size": 63488 00:12:17.433 }, 00:12:17.433 { 00:12:17.433 "name": "BaseBdev2", 00:12:17.433 "uuid": "4411f2f4-b098-4013-906f-a15b28921e3b", 00:12:17.433 "is_configured": true, 00:12:17.433 "data_offset": 2048, 00:12:17.433 "data_size": 63488 00:12:17.433 }, 00:12:17.433 { 00:12:17.433 "name": "BaseBdev3", 00:12:17.433 "uuid": "9be05ff1-8647-4dba-aa7a-580582285550", 00:12:17.433 "is_configured": true, 00:12:17.433 "data_offset": 2048, 00:12:17.433 "data_size": 63488 00:12:17.433 }, 00:12:17.433 { 00:12:17.433 "name": "BaseBdev4", 00:12:17.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.433 "is_configured": false, 00:12:17.433 "data_offset": 0, 00:12:17.433 "data_size": 0 00:12:17.433 } 00:12:17.433 ] 00:12:17.433 }' 00:12:17.433 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.433 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.999 [2024-11-15 11:24:00.843313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:17.999 [2024-11-15 11:24:00.843677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:17.999 [2024-11-15 11:24:00.843697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 
00:12:17.999 BaseBdev4 00:12:17.999 [2024-11-15 11:24:00.844061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:17.999 [2024-11-15 11:24:00.844290] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:17.999 [2024-11-15 11:24:00.844330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:17.999 [2024-11-15 11:24:00.844525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.999 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:18.000 11:24:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.000 [ 00:12:18.000 { 00:12:18.000 "name": "BaseBdev4", 00:12:18.000 "aliases": [ 00:12:18.000 "d2d86401-b992-4ea5-9e53-c5be70e18522" 00:12:18.000 ], 00:12:18.000 "product_name": "Malloc disk", 00:12:18.000 "block_size": 512, 00:12:18.000 "num_blocks": 65536, 00:12:18.000 "uuid": "d2d86401-b992-4ea5-9e53-c5be70e18522", 00:12:18.000 "assigned_rate_limits": { 00:12:18.000 "rw_ios_per_sec": 0, 00:12:18.000 "rw_mbytes_per_sec": 0, 00:12:18.000 "r_mbytes_per_sec": 0, 00:12:18.000 "w_mbytes_per_sec": 0 00:12:18.000 }, 00:12:18.000 "claimed": true, 00:12:18.000 "claim_type": "exclusive_write", 00:12:18.000 "zoned": false, 00:12:18.000 "supported_io_types": { 00:12:18.000 "read": true, 00:12:18.000 "write": true, 00:12:18.000 "unmap": true, 00:12:18.000 "flush": true, 00:12:18.000 "reset": true, 00:12:18.000 "nvme_admin": false, 00:12:18.000 "nvme_io": false, 00:12:18.000 "nvme_io_md": false, 00:12:18.000 "write_zeroes": true, 00:12:18.000 "zcopy": true, 00:12:18.000 "get_zone_info": false, 00:12:18.000 "zone_management": false, 00:12:18.000 "zone_append": false, 00:12:18.000 "compare": false, 00:12:18.000 "compare_and_write": false, 00:12:18.000 "abort": true, 00:12:18.000 "seek_hole": false, 00:12:18.000 "seek_data": false, 00:12:18.000 "copy": true, 00:12:18.000 "nvme_iov_md": false 00:12:18.000 }, 00:12:18.000 "memory_domains": [ 00:12:18.000 { 00:12:18.000 "dma_device_id": "system", 00:12:18.000 "dma_device_type": 1 00:12:18.000 }, 00:12:18.000 { 00:12:18.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.000 "dma_device_type": 2 00:12:18.000 } 00:12:18.000 ], 00:12:18.000 "driver_specific": {} 00:12:18.000 } 00:12:18.000 ] 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.000 11:24:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.000 11:24:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.000 "name": "Existed_Raid", 00:12:18.000 "uuid": "e69cd1ca-abbd-4a77-a8d3-e73160729dbd", 00:12:18.000 "strip_size_kb": 64, 00:12:18.000 "state": "online", 00:12:18.000 "raid_level": "concat", 00:12:18.000 "superblock": true, 00:12:18.000 "num_base_bdevs": 4, 00:12:18.000 "num_base_bdevs_discovered": 4, 00:12:18.000 "num_base_bdevs_operational": 4, 00:12:18.000 "base_bdevs_list": [ 00:12:18.000 { 00:12:18.000 "name": "BaseBdev1", 00:12:18.000 "uuid": "5a2b0ee1-8b4e-4c46-a0f3-7bc2758bae7f", 00:12:18.000 "is_configured": true, 00:12:18.000 "data_offset": 2048, 00:12:18.000 "data_size": 63488 00:12:18.000 }, 00:12:18.000 { 00:12:18.000 "name": "BaseBdev2", 00:12:18.000 "uuid": "4411f2f4-b098-4013-906f-a15b28921e3b", 00:12:18.000 "is_configured": true, 00:12:18.000 "data_offset": 2048, 00:12:18.000 "data_size": 63488 00:12:18.000 }, 00:12:18.000 { 00:12:18.000 "name": "BaseBdev3", 00:12:18.000 "uuid": "9be05ff1-8647-4dba-aa7a-580582285550", 00:12:18.000 "is_configured": true, 00:12:18.000 "data_offset": 2048, 00:12:18.000 "data_size": 63488 00:12:18.000 }, 00:12:18.000 { 00:12:18.000 "name": "BaseBdev4", 00:12:18.000 "uuid": "d2d86401-b992-4ea5-9e53-c5be70e18522", 00:12:18.000 "is_configured": true, 00:12:18.000 "data_offset": 2048, 00:12:18.000 "data_size": 63488 00:12:18.000 } 00:12:18.000 ] 00:12:18.000 }' 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.000 11:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.566 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:18.566 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 
00:12:18.566 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:18.566 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:18.566 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:18.566 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:18.566 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:18.566 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.566 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.566 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:18.566 [2024-11-15 11:24:01.412071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.566 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.566 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:18.566 "name": "Existed_Raid", 00:12:18.566 "aliases": [ 00:12:18.566 "e69cd1ca-abbd-4a77-a8d3-e73160729dbd" 00:12:18.566 ], 00:12:18.566 "product_name": "Raid Volume", 00:12:18.566 "block_size": 512, 00:12:18.566 "num_blocks": 253952, 00:12:18.566 "uuid": "e69cd1ca-abbd-4a77-a8d3-e73160729dbd", 00:12:18.566 "assigned_rate_limits": { 00:12:18.566 "rw_ios_per_sec": 0, 00:12:18.567 "rw_mbytes_per_sec": 0, 00:12:18.567 "r_mbytes_per_sec": 0, 00:12:18.567 "w_mbytes_per_sec": 0 00:12:18.567 }, 00:12:18.567 "claimed": false, 00:12:18.567 "zoned": false, 00:12:18.567 "supported_io_types": { 00:12:18.567 "read": true, 00:12:18.567 "write": true, 00:12:18.567 "unmap": true, 00:12:18.567 "flush": true, 00:12:18.567 "reset": true, 00:12:18.567 "nvme_admin": 
false, 00:12:18.567 "nvme_io": false, 00:12:18.567 "nvme_io_md": false, 00:12:18.567 "write_zeroes": true, 00:12:18.567 "zcopy": false, 00:12:18.567 "get_zone_info": false, 00:12:18.567 "zone_management": false, 00:12:18.567 "zone_append": false, 00:12:18.567 "compare": false, 00:12:18.567 "compare_and_write": false, 00:12:18.567 "abort": false, 00:12:18.567 "seek_hole": false, 00:12:18.567 "seek_data": false, 00:12:18.567 "copy": false, 00:12:18.567 "nvme_iov_md": false 00:12:18.567 }, 00:12:18.567 "memory_domains": [ 00:12:18.567 { 00:12:18.567 "dma_device_id": "system", 00:12:18.567 "dma_device_type": 1 00:12:18.567 }, 00:12:18.567 { 00:12:18.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.567 "dma_device_type": 2 00:12:18.567 }, 00:12:18.567 { 00:12:18.567 "dma_device_id": "system", 00:12:18.567 "dma_device_type": 1 00:12:18.567 }, 00:12:18.567 { 00:12:18.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.567 "dma_device_type": 2 00:12:18.567 }, 00:12:18.567 { 00:12:18.567 "dma_device_id": "system", 00:12:18.567 "dma_device_type": 1 00:12:18.567 }, 00:12:18.567 { 00:12:18.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.567 "dma_device_type": 2 00:12:18.567 }, 00:12:18.567 { 00:12:18.567 "dma_device_id": "system", 00:12:18.567 "dma_device_type": 1 00:12:18.567 }, 00:12:18.567 { 00:12:18.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.567 "dma_device_type": 2 00:12:18.567 } 00:12:18.567 ], 00:12:18.567 "driver_specific": { 00:12:18.567 "raid": { 00:12:18.567 "uuid": "e69cd1ca-abbd-4a77-a8d3-e73160729dbd", 00:12:18.567 "strip_size_kb": 64, 00:12:18.567 "state": "online", 00:12:18.567 "raid_level": "concat", 00:12:18.567 "superblock": true, 00:12:18.567 "num_base_bdevs": 4, 00:12:18.567 "num_base_bdevs_discovered": 4, 00:12:18.567 "num_base_bdevs_operational": 4, 00:12:18.567 "base_bdevs_list": [ 00:12:18.567 { 00:12:18.567 "name": "BaseBdev1", 00:12:18.567 "uuid": "5a2b0ee1-8b4e-4c46-a0f3-7bc2758bae7f", 00:12:18.567 "is_configured": true, 
00:12:18.567 "data_offset": 2048, 00:12:18.567 "data_size": 63488 00:12:18.567 }, 00:12:18.567 { 00:12:18.567 "name": "BaseBdev2", 00:12:18.567 "uuid": "4411f2f4-b098-4013-906f-a15b28921e3b", 00:12:18.567 "is_configured": true, 00:12:18.567 "data_offset": 2048, 00:12:18.567 "data_size": 63488 00:12:18.567 }, 00:12:18.567 { 00:12:18.567 "name": "BaseBdev3", 00:12:18.567 "uuid": "9be05ff1-8647-4dba-aa7a-580582285550", 00:12:18.567 "is_configured": true, 00:12:18.567 "data_offset": 2048, 00:12:18.567 "data_size": 63488 00:12:18.567 }, 00:12:18.567 { 00:12:18.567 "name": "BaseBdev4", 00:12:18.567 "uuid": "d2d86401-b992-4ea5-9e53-c5be70e18522", 00:12:18.567 "is_configured": true, 00:12:18.567 "data_offset": 2048, 00:12:18.567 "data_size": 63488 00:12:18.567 } 00:12:18.567 ] 00:12:18.567 } 00:12:18.567 } 00:12:18.567 }' 00:12:18.567 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.567 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:18.567 BaseBdev2 00:12:18.567 BaseBdev3 00:12:18.567 BaseBdev4' 00:12:18.567 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.825 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.083 [2024-11-15 11:24:01.791789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:19.083 [2024-11-15 11:24:01.791831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.083 [2024-11-15 11:24:01.791897] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.083 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.083 "name": "Existed_Raid", 00:12:19.083 "uuid": "e69cd1ca-abbd-4a77-a8d3-e73160729dbd", 00:12:19.083 "strip_size_kb": 64, 00:12:19.083 "state": "offline", 00:12:19.083 "raid_level": "concat", 00:12:19.083 "superblock": true, 00:12:19.083 "num_base_bdevs": 4, 00:12:19.083 "num_base_bdevs_discovered": 3, 00:12:19.083 "num_base_bdevs_operational": 3, 00:12:19.083 "base_bdevs_list": [ 00:12:19.083 { 00:12:19.083 "name": null, 00:12:19.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.083 "is_configured": false, 00:12:19.083 "data_offset": 0, 00:12:19.083 "data_size": 63488 00:12:19.083 }, 00:12:19.083 { 00:12:19.083 "name": "BaseBdev2", 00:12:19.083 "uuid": "4411f2f4-b098-4013-906f-a15b28921e3b", 00:12:19.084 "is_configured": true, 00:12:19.084 "data_offset": 2048, 00:12:19.084 "data_size": 63488 00:12:19.084 }, 00:12:19.084 { 00:12:19.084 "name": "BaseBdev3", 00:12:19.084 "uuid": "9be05ff1-8647-4dba-aa7a-580582285550", 00:12:19.084 "is_configured": true, 00:12:19.084 "data_offset": 2048, 00:12:19.084 "data_size": 63488 00:12:19.084 }, 00:12:19.084 { 00:12:19.084 "name": "BaseBdev4", 00:12:19.084 "uuid": "d2d86401-b992-4ea5-9e53-c5be70e18522", 00:12:19.084 "is_configured": true, 00:12:19.084 "data_offset": 2048, 00:12:19.084 "data_size": 63488 00:12:19.084 } 00:12:19.084 ] 00:12:19.084 }' 00:12:19.084 11:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:19.084 11:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.652 [2024-11-15 11:24:02.445991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.652 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.652 [2024-11-15 11:24:02.587171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.909 11:24:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.909 [2024-11-15 11:24:02.737962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:19.909 [2024-11-15 11:24:02.738061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:19.909 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.167 BaseBdev2 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.167 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.167 [ 00:12:20.167 { 00:12:20.167 "name": "BaseBdev2", 00:12:20.167 "aliases": [ 00:12:20.167 "604da54e-3409-4835-bce2-c1b34773abf3" 00:12:20.167 ], 00:12:20.168 "product_name": "Malloc disk", 00:12:20.168 "block_size": 512, 00:12:20.168 "num_blocks": 65536, 00:12:20.168 "uuid": "604da54e-3409-4835-bce2-c1b34773abf3", 00:12:20.168 "assigned_rate_limits": { 00:12:20.168 "rw_ios_per_sec": 0, 00:12:20.168 "rw_mbytes_per_sec": 0, 00:12:20.168 "r_mbytes_per_sec": 0, 00:12:20.168 "w_mbytes_per_sec": 0 00:12:20.168 }, 00:12:20.168 "claimed": false, 00:12:20.168 "zoned": false, 00:12:20.168 "supported_io_types": { 00:12:20.168 "read": true, 00:12:20.168 "write": true, 00:12:20.168 "unmap": true, 00:12:20.168 "flush": true, 00:12:20.168 "reset": true, 00:12:20.168 "nvme_admin": false, 00:12:20.168 "nvme_io": false, 00:12:20.168 "nvme_io_md": false, 00:12:20.168 "write_zeroes": true, 00:12:20.168 "zcopy": true, 00:12:20.168 "get_zone_info": false, 00:12:20.168 "zone_management": false, 00:12:20.168 "zone_append": false, 00:12:20.168 "compare": false, 00:12:20.168 "compare_and_write": false, 00:12:20.168 "abort": true, 00:12:20.168 "seek_hole": false, 00:12:20.168 "seek_data": false, 00:12:20.168 "copy": true, 00:12:20.168 "nvme_iov_md": false 00:12:20.168 }, 00:12:20.168 "memory_domains": [ 00:12:20.168 { 00:12:20.168 "dma_device_id": "system", 00:12:20.168 "dma_device_type": 1 00:12:20.168 }, 00:12:20.168 { 00:12:20.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.168 "dma_device_type": 2 00:12:20.168 } 00:12:20.168 ], 00:12:20.168 "driver_specific": {} 00:12:20.168 } 00:12:20.168 ] 00:12:20.168 11:24:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.168 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:20.168 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.168 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.168 11:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:20.168 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.168 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.168 BaseBdev3 00:12:20.168 11:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.168 [ 00:12:20.168 { 00:12:20.168 "name": "BaseBdev3", 00:12:20.168 "aliases": [ 00:12:20.168 "12b6cefe-017e-4c53-9257-277ab29d88d4" 00:12:20.168 ], 00:12:20.168 "product_name": "Malloc disk", 00:12:20.168 "block_size": 512, 00:12:20.168 "num_blocks": 65536, 00:12:20.168 "uuid": "12b6cefe-017e-4c53-9257-277ab29d88d4", 00:12:20.168 "assigned_rate_limits": { 00:12:20.168 "rw_ios_per_sec": 0, 00:12:20.168 "rw_mbytes_per_sec": 0, 00:12:20.168 "r_mbytes_per_sec": 0, 00:12:20.168 "w_mbytes_per_sec": 0 00:12:20.168 }, 00:12:20.168 "claimed": false, 00:12:20.168 "zoned": false, 00:12:20.168 "supported_io_types": { 00:12:20.168 "read": true, 00:12:20.168 "write": true, 00:12:20.168 "unmap": true, 00:12:20.168 "flush": true, 00:12:20.168 "reset": true, 00:12:20.168 "nvme_admin": false, 00:12:20.168 "nvme_io": false, 00:12:20.168 "nvme_io_md": false, 00:12:20.168 "write_zeroes": true, 00:12:20.168 "zcopy": true, 00:12:20.168 "get_zone_info": false, 00:12:20.168 "zone_management": false, 00:12:20.168 "zone_append": false, 00:12:20.168 "compare": false, 00:12:20.168 "compare_and_write": false, 00:12:20.168 "abort": true, 00:12:20.168 "seek_hole": false, 00:12:20.168 "seek_data": false, 00:12:20.168 "copy": true, 00:12:20.168 "nvme_iov_md": false 00:12:20.168 }, 00:12:20.168 "memory_domains": [ 00:12:20.168 { 00:12:20.168 "dma_device_id": "system", 00:12:20.168 "dma_device_type": 1 00:12:20.168 }, 00:12:20.168 { 00:12:20.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.168 "dma_device_type": 2 00:12:20.168 } 00:12:20.168 ], 00:12:20.168 "driver_specific": {} 
00:12:20.168 } 00:12:20.168 ] 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.168 BaseBdev4 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.168 
11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.168 [ 00:12:20.168 { 00:12:20.168 "name": "BaseBdev4", 00:12:20.168 "aliases": [ 00:12:20.168 "b81a56be-3219-4d90-8a98-1fa7bd0d2a7e" 00:12:20.168 ], 00:12:20.168 "product_name": "Malloc disk", 00:12:20.168 "block_size": 512, 00:12:20.168 "num_blocks": 65536, 00:12:20.168 "uuid": "b81a56be-3219-4d90-8a98-1fa7bd0d2a7e", 00:12:20.168 "assigned_rate_limits": { 00:12:20.168 "rw_ios_per_sec": 0, 00:12:20.168 "rw_mbytes_per_sec": 0, 00:12:20.168 "r_mbytes_per_sec": 0, 00:12:20.168 "w_mbytes_per_sec": 0 00:12:20.168 }, 00:12:20.168 "claimed": false, 00:12:20.168 "zoned": false, 00:12:20.168 "supported_io_types": { 00:12:20.168 "read": true, 00:12:20.168 "write": true, 00:12:20.168 "unmap": true, 00:12:20.168 "flush": true, 00:12:20.168 "reset": true, 00:12:20.168 "nvme_admin": false, 00:12:20.168 "nvme_io": false, 00:12:20.168 "nvme_io_md": false, 00:12:20.168 "write_zeroes": true, 00:12:20.168 "zcopy": true, 00:12:20.168 "get_zone_info": false, 00:12:20.168 "zone_management": false, 00:12:20.168 "zone_append": false, 00:12:20.168 "compare": false, 00:12:20.168 "compare_and_write": false, 00:12:20.168 "abort": true, 00:12:20.168 "seek_hole": false, 00:12:20.168 "seek_data": false, 00:12:20.168 "copy": true, 00:12:20.168 "nvme_iov_md": false 00:12:20.168 }, 00:12:20.168 "memory_domains": [ 00:12:20.168 { 00:12:20.168 "dma_device_id": "system", 00:12:20.168 "dma_device_type": 1 00:12:20.168 }, 00:12:20.168 { 00:12:20.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.168 "dma_device_type": 2 00:12:20.168 } 
00:12:20.168 ], 00:12:20.168 "driver_specific": {} 00:12:20.168 } 00:12:20.168 ] 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.168 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.427 [2024-11-15 11:24:03.119685] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.427 [2024-11-15 11:24:03.119940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.427 [2024-11-15 11:24:03.120131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.427 [2024-11-15 11:24:03.122916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.427 [2024-11-15 11:24:03.123171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.427 11:24:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.427 "name": "Existed_Raid", 00:12:20.427 "uuid": "cd0dcc49-9ed8-4fc4-91d1-6b4408258bca", 00:12:20.427 "strip_size_kb": 64, 00:12:20.427 "state": "configuring", 00:12:20.427 "raid_level": "concat", 00:12:20.427 "superblock": true, 00:12:20.427 "num_base_bdevs": 4, 00:12:20.427 "num_base_bdevs_discovered": 3, 00:12:20.427 "num_base_bdevs_operational": 4, 00:12:20.427 "base_bdevs_list": [ 00:12:20.427 
{ 00:12:20.427 "name": "BaseBdev1", 00:12:20.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.427 "is_configured": false, 00:12:20.427 "data_offset": 0, 00:12:20.427 "data_size": 0 00:12:20.427 }, 00:12:20.427 { 00:12:20.427 "name": "BaseBdev2", 00:12:20.427 "uuid": "604da54e-3409-4835-bce2-c1b34773abf3", 00:12:20.427 "is_configured": true, 00:12:20.427 "data_offset": 2048, 00:12:20.427 "data_size": 63488 00:12:20.427 }, 00:12:20.427 { 00:12:20.427 "name": "BaseBdev3", 00:12:20.427 "uuid": "12b6cefe-017e-4c53-9257-277ab29d88d4", 00:12:20.427 "is_configured": true, 00:12:20.427 "data_offset": 2048, 00:12:20.427 "data_size": 63488 00:12:20.427 }, 00:12:20.427 { 00:12:20.427 "name": "BaseBdev4", 00:12:20.427 "uuid": "b81a56be-3219-4d90-8a98-1fa7bd0d2a7e", 00:12:20.427 "is_configured": true, 00:12:20.427 "data_offset": 2048, 00:12:20.427 "data_size": 63488 00:12:20.427 } 00:12:20.427 ] 00:12:20.427 }' 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.427 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.685 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:20.685 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.685 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.685 [2024-11-15 11:24:03.631875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.942 11:24:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.942 "name": "Existed_Raid", 00:12:20.942 "uuid": "cd0dcc49-9ed8-4fc4-91d1-6b4408258bca", 00:12:20.942 "strip_size_kb": 64, 00:12:20.942 "state": "configuring", 00:12:20.942 "raid_level": "concat", 00:12:20.942 "superblock": true, 00:12:20.942 "num_base_bdevs": 4, 00:12:20.942 "num_base_bdevs_discovered": 2, 00:12:20.942 "num_base_bdevs_operational": 4, 00:12:20.942 "base_bdevs_list": [ 00:12:20.942 
{ 00:12:20.942 "name": "BaseBdev1", 00:12:20.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.942 "is_configured": false, 00:12:20.942 "data_offset": 0, 00:12:20.942 "data_size": 0 00:12:20.942 }, 00:12:20.942 { 00:12:20.942 "name": null, 00:12:20.942 "uuid": "604da54e-3409-4835-bce2-c1b34773abf3", 00:12:20.942 "is_configured": false, 00:12:20.942 "data_offset": 0, 00:12:20.942 "data_size": 63488 00:12:20.942 }, 00:12:20.942 { 00:12:20.942 "name": "BaseBdev3", 00:12:20.942 "uuid": "12b6cefe-017e-4c53-9257-277ab29d88d4", 00:12:20.942 "is_configured": true, 00:12:20.942 "data_offset": 2048, 00:12:20.942 "data_size": 63488 00:12:20.942 }, 00:12:20.942 { 00:12:20.942 "name": "BaseBdev4", 00:12:20.942 "uuid": "b81a56be-3219-4d90-8a98-1fa7bd0d2a7e", 00:12:20.942 "is_configured": true, 00:12:20.942 "data_offset": 2048, 00:12:20.942 "data_size": 63488 00:12:20.942 } 00:12:20.942 ] 00:12:20.942 }' 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.942 11:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.257 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.257 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:21.257 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.257 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.257 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.257 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:21.257 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:21.257 11:24:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.257 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.530 [2024-11-15 11:24:04.228434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.530 BaseBdev1 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.530 [ 00:12:21.530 { 00:12:21.530 
"name": "BaseBdev1", 00:12:21.530 "aliases": [ 00:12:21.530 "661effd1-2d62-4b4e-b63b-a6f703e8b255" 00:12:21.530 ], 00:12:21.530 "product_name": "Malloc disk", 00:12:21.530 "block_size": 512, 00:12:21.530 "num_blocks": 65536, 00:12:21.530 "uuid": "661effd1-2d62-4b4e-b63b-a6f703e8b255", 00:12:21.530 "assigned_rate_limits": { 00:12:21.530 "rw_ios_per_sec": 0, 00:12:21.530 "rw_mbytes_per_sec": 0, 00:12:21.530 "r_mbytes_per_sec": 0, 00:12:21.530 "w_mbytes_per_sec": 0 00:12:21.530 }, 00:12:21.530 "claimed": true, 00:12:21.530 "claim_type": "exclusive_write", 00:12:21.530 "zoned": false, 00:12:21.530 "supported_io_types": { 00:12:21.530 "read": true, 00:12:21.530 "write": true, 00:12:21.530 "unmap": true, 00:12:21.530 "flush": true, 00:12:21.530 "reset": true, 00:12:21.530 "nvme_admin": false, 00:12:21.530 "nvme_io": false, 00:12:21.530 "nvme_io_md": false, 00:12:21.530 "write_zeroes": true, 00:12:21.530 "zcopy": true, 00:12:21.530 "get_zone_info": false, 00:12:21.530 "zone_management": false, 00:12:21.530 "zone_append": false, 00:12:21.530 "compare": false, 00:12:21.530 "compare_and_write": false, 00:12:21.530 "abort": true, 00:12:21.530 "seek_hole": false, 00:12:21.530 "seek_data": false, 00:12:21.530 "copy": true, 00:12:21.530 "nvme_iov_md": false 00:12:21.530 }, 00:12:21.530 "memory_domains": [ 00:12:21.530 { 00:12:21.530 "dma_device_id": "system", 00:12:21.530 "dma_device_type": 1 00:12:21.530 }, 00:12:21.530 { 00:12:21.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.530 "dma_device_type": 2 00:12:21.530 } 00:12:21.530 ], 00:12:21.530 "driver_specific": {} 00:12:21.530 } 00:12:21.530 ] 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:21.530 
11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.530 "name": "Existed_Raid", 00:12:21.530 "uuid": "cd0dcc49-9ed8-4fc4-91d1-6b4408258bca", 00:12:21.530 "strip_size_kb": 64, 00:12:21.530 "state": "configuring", 00:12:21.530 "raid_level": "concat", 00:12:21.530 "superblock": true, 00:12:21.530 "num_base_bdevs": 4, 
00:12:21.530 "num_base_bdevs_discovered": 3, 00:12:21.530 "num_base_bdevs_operational": 4, 00:12:21.530 "base_bdevs_list": [ 00:12:21.530 { 00:12:21.530 "name": "BaseBdev1", 00:12:21.530 "uuid": "661effd1-2d62-4b4e-b63b-a6f703e8b255", 00:12:21.530 "is_configured": true, 00:12:21.530 "data_offset": 2048, 00:12:21.530 "data_size": 63488 00:12:21.530 }, 00:12:21.530 { 00:12:21.530 "name": null, 00:12:21.530 "uuid": "604da54e-3409-4835-bce2-c1b34773abf3", 00:12:21.530 "is_configured": false, 00:12:21.530 "data_offset": 0, 00:12:21.530 "data_size": 63488 00:12:21.530 }, 00:12:21.530 { 00:12:21.530 "name": "BaseBdev3", 00:12:21.530 "uuid": "12b6cefe-017e-4c53-9257-277ab29d88d4", 00:12:21.530 "is_configured": true, 00:12:21.530 "data_offset": 2048, 00:12:21.530 "data_size": 63488 00:12:21.530 }, 00:12:21.530 { 00:12:21.530 "name": "BaseBdev4", 00:12:21.530 "uuid": "b81a56be-3219-4d90-8a98-1fa7bd0d2a7e", 00:12:21.530 "is_configured": true, 00:12:21.530 "data_offset": 2048, 00:12:21.530 "data_size": 63488 00:12:21.530 } 00:12:21.530 ] 00:12:21.530 }' 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.530 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:22.096 11:24:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.096 [2024-11-15 11:24:04.820740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.096 "name": "Existed_Raid", 00:12:22.096 "uuid": "cd0dcc49-9ed8-4fc4-91d1-6b4408258bca", 00:12:22.096 "strip_size_kb": 64, 00:12:22.096 "state": "configuring", 00:12:22.096 "raid_level": "concat", 00:12:22.096 "superblock": true, 00:12:22.096 "num_base_bdevs": 4, 00:12:22.096 "num_base_bdevs_discovered": 2, 00:12:22.096 "num_base_bdevs_operational": 4, 00:12:22.096 "base_bdevs_list": [ 00:12:22.096 { 00:12:22.096 "name": "BaseBdev1", 00:12:22.096 "uuid": "661effd1-2d62-4b4e-b63b-a6f703e8b255", 00:12:22.096 "is_configured": true, 00:12:22.096 "data_offset": 2048, 00:12:22.096 "data_size": 63488 00:12:22.096 }, 00:12:22.096 { 00:12:22.096 "name": null, 00:12:22.096 "uuid": "604da54e-3409-4835-bce2-c1b34773abf3", 00:12:22.096 "is_configured": false, 00:12:22.096 "data_offset": 0, 00:12:22.096 "data_size": 63488 00:12:22.096 }, 00:12:22.096 { 00:12:22.096 "name": null, 00:12:22.096 "uuid": "12b6cefe-017e-4c53-9257-277ab29d88d4", 00:12:22.096 "is_configured": false, 00:12:22.096 "data_offset": 0, 00:12:22.096 "data_size": 63488 00:12:22.096 }, 00:12:22.096 { 00:12:22.096 "name": "BaseBdev4", 00:12:22.096 "uuid": "b81a56be-3219-4d90-8a98-1fa7bd0d2a7e", 00:12:22.096 "is_configured": true, 00:12:22.096 "data_offset": 2048, 00:12:22.096 "data_size": 63488 00:12:22.096 } 00:12:22.096 ] 00:12:22.096 }' 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.096 11:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.663 11:24:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.663 [2024-11-15 11:24:05.388902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.663 "name": "Existed_Raid", 00:12:22.663 "uuid": "cd0dcc49-9ed8-4fc4-91d1-6b4408258bca", 00:12:22.663 "strip_size_kb": 64, 00:12:22.663 "state": "configuring", 00:12:22.663 "raid_level": "concat", 00:12:22.663 "superblock": true, 00:12:22.663 "num_base_bdevs": 4, 00:12:22.663 "num_base_bdevs_discovered": 3, 00:12:22.663 "num_base_bdevs_operational": 4, 00:12:22.663 "base_bdevs_list": [ 00:12:22.663 { 00:12:22.663 "name": "BaseBdev1", 00:12:22.663 "uuid": "661effd1-2d62-4b4e-b63b-a6f703e8b255", 00:12:22.663 "is_configured": true, 00:12:22.663 "data_offset": 2048, 00:12:22.663 "data_size": 63488 00:12:22.663 }, 00:12:22.663 { 00:12:22.663 "name": null, 00:12:22.663 "uuid": "604da54e-3409-4835-bce2-c1b34773abf3", 00:12:22.663 "is_configured": false, 00:12:22.663 "data_offset": 0, 00:12:22.663 "data_size": 63488 
00:12:22.663 }, 00:12:22.663 { 00:12:22.663 "name": "BaseBdev3", 00:12:22.663 "uuid": "12b6cefe-017e-4c53-9257-277ab29d88d4", 00:12:22.663 "is_configured": true, 00:12:22.663 "data_offset": 2048, 00:12:22.663 "data_size": 63488 00:12:22.663 }, 00:12:22.663 { 00:12:22.663 "name": "BaseBdev4", 00:12:22.663 "uuid": "b81a56be-3219-4d90-8a98-1fa7bd0d2a7e", 00:12:22.663 "is_configured": true, 00:12:22.663 "data_offset": 2048, 00:12:22.663 "data_size": 63488 00:12:22.663 } 00:12:22.663 ] 00:12:22.663 }' 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.663 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.229 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.229 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.229 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:23.229 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.229 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.229 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:23.229 11:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:23.229 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.229 11:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.229 [2024-11-15 11:24:05.969109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.229 11:24:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.229 "name": "Existed_Raid", 00:12:23.229 "uuid": "cd0dcc49-9ed8-4fc4-91d1-6b4408258bca", 00:12:23.229 "strip_size_kb": 64, 
00:12:23.229 "state": "configuring", 00:12:23.229 "raid_level": "concat", 00:12:23.229 "superblock": true, 00:12:23.229 "num_base_bdevs": 4, 00:12:23.229 "num_base_bdevs_discovered": 2, 00:12:23.229 "num_base_bdevs_operational": 4, 00:12:23.229 "base_bdevs_list": [ 00:12:23.229 { 00:12:23.229 "name": null, 00:12:23.229 "uuid": "661effd1-2d62-4b4e-b63b-a6f703e8b255", 00:12:23.229 "is_configured": false, 00:12:23.229 "data_offset": 0, 00:12:23.229 "data_size": 63488 00:12:23.229 }, 00:12:23.229 { 00:12:23.229 "name": null, 00:12:23.229 "uuid": "604da54e-3409-4835-bce2-c1b34773abf3", 00:12:23.229 "is_configured": false, 00:12:23.229 "data_offset": 0, 00:12:23.229 "data_size": 63488 00:12:23.229 }, 00:12:23.229 { 00:12:23.229 "name": "BaseBdev3", 00:12:23.229 "uuid": "12b6cefe-017e-4c53-9257-277ab29d88d4", 00:12:23.229 "is_configured": true, 00:12:23.229 "data_offset": 2048, 00:12:23.229 "data_size": 63488 00:12:23.229 }, 00:12:23.229 { 00:12:23.229 "name": "BaseBdev4", 00:12:23.229 "uuid": "b81a56be-3219-4d90-8a98-1fa7bd0d2a7e", 00:12:23.229 "is_configured": true, 00:12:23.229 "data_offset": 2048, 00:12:23.229 "data_size": 63488 00:12:23.229 } 00:12:23.229 ] 00:12:23.229 }' 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.229 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.796 
11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.796 [2024-11-15 11:24:06.620481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.796 "name": "Existed_Raid", 00:12:23.796 "uuid": "cd0dcc49-9ed8-4fc4-91d1-6b4408258bca", 00:12:23.796 "strip_size_kb": 64, 00:12:23.796 "state": "configuring", 00:12:23.796 "raid_level": "concat", 00:12:23.796 "superblock": true, 00:12:23.796 "num_base_bdevs": 4, 00:12:23.796 "num_base_bdevs_discovered": 3, 00:12:23.796 "num_base_bdevs_operational": 4, 00:12:23.796 "base_bdevs_list": [ 00:12:23.796 { 00:12:23.796 "name": null, 00:12:23.796 "uuid": "661effd1-2d62-4b4e-b63b-a6f703e8b255", 00:12:23.796 "is_configured": false, 00:12:23.796 "data_offset": 0, 00:12:23.796 "data_size": 63488 00:12:23.796 }, 00:12:23.796 { 00:12:23.796 "name": "BaseBdev2", 00:12:23.796 "uuid": "604da54e-3409-4835-bce2-c1b34773abf3", 00:12:23.796 "is_configured": true, 00:12:23.796 "data_offset": 2048, 00:12:23.796 "data_size": 63488 00:12:23.796 }, 00:12:23.796 { 00:12:23.796 "name": "BaseBdev3", 00:12:23.796 "uuid": "12b6cefe-017e-4c53-9257-277ab29d88d4", 00:12:23.796 "is_configured": true, 00:12:23.796 "data_offset": 2048, 00:12:23.796 "data_size": 63488 00:12:23.796 }, 00:12:23.796 { 00:12:23.796 "name": "BaseBdev4", 00:12:23.796 "uuid": "b81a56be-3219-4d90-8a98-1fa7bd0d2a7e", 00:12:23.796 "is_configured": true, 00:12:23.796 "data_offset": 2048, 00:12:23.796 "data_size": 63488 00:12:23.796 } 00:12:23.796 ] 00:12:23.796 }' 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.796 11:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 661effd1-2d62-4b4e-b63b-a6f703e8b255 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.362 [2024-11-15 11:24:07.270983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:24.362 [2024-11-15 11:24:07.271532] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:24.362 [2024-11-15 11:24:07.271558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:24.362 NewBaseBdev 00:12:24.362 [2024-11-15 11:24:07.271940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:24.362 [2024-11-15 11:24:07.272145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:24.362 [2024-11-15 11:24:07.272172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:24.362 [2024-11-15 11:24:07.272347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.362 [ 00:12:24.362 { 00:12:24.362 "name": "NewBaseBdev", 00:12:24.362 "aliases": [ 00:12:24.362 "661effd1-2d62-4b4e-b63b-a6f703e8b255" 00:12:24.362 ], 00:12:24.362 "product_name": "Malloc disk", 00:12:24.362 "block_size": 512, 00:12:24.362 "num_blocks": 65536, 00:12:24.362 "uuid": "661effd1-2d62-4b4e-b63b-a6f703e8b255", 00:12:24.362 "assigned_rate_limits": { 00:12:24.362 "rw_ios_per_sec": 0, 00:12:24.362 "rw_mbytes_per_sec": 0, 00:12:24.362 "r_mbytes_per_sec": 0, 00:12:24.362 "w_mbytes_per_sec": 0 00:12:24.362 }, 00:12:24.362 "claimed": true, 00:12:24.362 "claim_type": "exclusive_write", 00:12:24.362 "zoned": false, 00:12:24.362 "supported_io_types": { 00:12:24.362 "read": true, 00:12:24.362 "write": true, 00:12:24.362 "unmap": true, 00:12:24.362 "flush": true, 00:12:24.362 "reset": true, 00:12:24.362 "nvme_admin": false, 00:12:24.362 "nvme_io": false, 00:12:24.362 "nvme_io_md": false, 00:12:24.362 "write_zeroes": true, 00:12:24.362 "zcopy": true, 00:12:24.362 "get_zone_info": false, 00:12:24.362 "zone_management": false, 00:12:24.362 "zone_append": false, 00:12:24.362 "compare": false, 00:12:24.362 "compare_and_write": false, 00:12:24.362 "abort": true, 00:12:24.362 "seek_hole": false, 00:12:24.362 "seek_data": false, 00:12:24.362 "copy": true, 00:12:24.362 "nvme_iov_md": false 00:12:24.362 }, 00:12:24.362 "memory_domains": [ 00:12:24.362 { 00:12:24.362 "dma_device_id": "system", 00:12:24.362 "dma_device_type": 1 00:12:24.362 }, 00:12:24.362 { 00:12:24.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.362 "dma_device_type": 2 00:12:24.362 } 
00:12:24.362 ], 00:12:24.362 "driver_specific": {} 00:12:24.362 } 00:12:24.362 ] 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.362 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.621 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.621 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.621 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.621 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.621 11:24:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.621 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.621 "name": "Existed_Raid", 00:12:24.621 "uuid": "cd0dcc49-9ed8-4fc4-91d1-6b4408258bca", 00:12:24.621 "strip_size_kb": 64, 00:12:24.621 "state": "online", 00:12:24.621 "raid_level": "concat", 00:12:24.621 "superblock": true, 00:12:24.621 "num_base_bdevs": 4, 00:12:24.621 "num_base_bdevs_discovered": 4, 00:12:24.621 "num_base_bdevs_operational": 4, 00:12:24.621 "base_bdevs_list": [ 00:12:24.621 { 00:12:24.621 "name": "NewBaseBdev", 00:12:24.621 "uuid": "661effd1-2d62-4b4e-b63b-a6f703e8b255", 00:12:24.621 "is_configured": true, 00:12:24.621 "data_offset": 2048, 00:12:24.621 "data_size": 63488 00:12:24.621 }, 00:12:24.621 { 00:12:24.621 "name": "BaseBdev2", 00:12:24.621 "uuid": "604da54e-3409-4835-bce2-c1b34773abf3", 00:12:24.621 "is_configured": true, 00:12:24.621 "data_offset": 2048, 00:12:24.621 "data_size": 63488 00:12:24.621 }, 00:12:24.621 { 00:12:24.621 "name": "BaseBdev3", 00:12:24.621 "uuid": "12b6cefe-017e-4c53-9257-277ab29d88d4", 00:12:24.621 "is_configured": true, 00:12:24.621 "data_offset": 2048, 00:12:24.621 "data_size": 63488 00:12:24.621 }, 00:12:24.621 { 00:12:24.621 "name": "BaseBdev4", 00:12:24.621 "uuid": "b81a56be-3219-4d90-8a98-1fa7bd0d2a7e", 00:12:24.621 "is_configured": true, 00:12:24.621 "data_offset": 2048, 00:12:24.621 "data_size": 63488 00:12:24.621 } 00:12:24.621 ] 00:12:24.621 }' 00:12:24.621 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.621 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.879 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:24.879 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 
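The `verify_raid_bdev_properties` steps that follow compare `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` for the raid volume against each base bdev, which is why the trace captures `cmp_raid_bdev='512 '` (512 plus empty metadata fields joined by spaces). A hypothetical Python equivalent — `fingerprint` is an illustrative name, not an SPDK helper — showing how the null fields collapse under jq's `join`:

```python
# Hypothetical stand-ins for rpc_cmd bdev_get_bdevs output; fields mirror
# this trace: block_size 512, no metadata or DIF on the malloc base bdevs.
raid_bdev = {"block_size": 512}
base_bdevs = {
    "NewBaseBdev": {"block_size": 512},
    "BaseBdev2": {"block_size": 512},
}

def fingerprint(bdev):
    """Mirror jq '[.block_size, .md_size, .md_interleave, .dif_type]
    | join(" ")': jq renders null elements as empty strings, so absent
    metadata fields become trailing spaces in the joined result."""
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(f) is None else str(bdev[f])
                    for f in fields)

cmp_raid_bdev = fingerprint(raid_bdev)
for name, bdev in base_bdevs.items():
    # Every base bdev must match the raid volume's block/metadata layout.
    assert fingerprint(bdev) == cmp_raid_bdev, name
print(repr(cmp_raid_bdev))  # '512   '
```

The trailing spaces explain the escaped shell comparison seen in the log, `[[ 512 == \5\1\2\ \ \ ]]`: the pattern is the literal string `512` followed by three spaces, one per empty metadata field.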
00:12:24.879 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:24.879 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:24.879 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:24.879 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:24.879 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:24.879 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.879 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.879 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:24.879 [2024-11-15 11:24:07.823795] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.137 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.137 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.137 "name": "Existed_Raid", 00:12:25.137 "aliases": [ 00:12:25.137 "cd0dcc49-9ed8-4fc4-91d1-6b4408258bca" 00:12:25.137 ], 00:12:25.137 "product_name": "Raid Volume", 00:12:25.137 "block_size": 512, 00:12:25.137 "num_blocks": 253952, 00:12:25.137 "uuid": "cd0dcc49-9ed8-4fc4-91d1-6b4408258bca", 00:12:25.137 "assigned_rate_limits": { 00:12:25.137 "rw_ios_per_sec": 0, 00:12:25.137 "rw_mbytes_per_sec": 0, 00:12:25.137 "r_mbytes_per_sec": 0, 00:12:25.137 "w_mbytes_per_sec": 0 00:12:25.137 }, 00:12:25.137 "claimed": false, 00:12:25.137 "zoned": false, 00:12:25.137 "supported_io_types": { 00:12:25.137 "read": true, 00:12:25.138 "write": true, 00:12:25.138 "unmap": true, 00:12:25.138 "flush": true, 00:12:25.138 "reset": true, 00:12:25.138 "nvme_admin": 
false, 00:12:25.138 "nvme_io": false, 00:12:25.138 "nvme_io_md": false, 00:12:25.138 "write_zeroes": true, 00:12:25.138 "zcopy": false, 00:12:25.138 "get_zone_info": false, 00:12:25.138 "zone_management": false, 00:12:25.138 "zone_append": false, 00:12:25.138 "compare": false, 00:12:25.138 "compare_and_write": false, 00:12:25.138 "abort": false, 00:12:25.138 "seek_hole": false, 00:12:25.138 "seek_data": false, 00:12:25.138 "copy": false, 00:12:25.138 "nvme_iov_md": false 00:12:25.138 }, 00:12:25.138 "memory_domains": [ 00:12:25.138 { 00:12:25.138 "dma_device_id": "system", 00:12:25.138 "dma_device_type": 1 00:12:25.138 }, 00:12:25.138 { 00:12:25.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.138 "dma_device_type": 2 00:12:25.138 }, 00:12:25.138 { 00:12:25.138 "dma_device_id": "system", 00:12:25.138 "dma_device_type": 1 00:12:25.138 }, 00:12:25.138 { 00:12:25.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.138 "dma_device_type": 2 00:12:25.138 }, 00:12:25.138 { 00:12:25.138 "dma_device_id": "system", 00:12:25.138 "dma_device_type": 1 00:12:25.138 }, 00:12:25.138 { 00:12:25.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.138 "dma_device_type": 2 00:12:25.138 }, 00:12:25.138 { 00:12:25.138 "dma_device_id": "system", 00:12:25.138 "dma_device_type": 1 00:12:25.138 }, 00:12:25.138 { 00:12:25.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.138 "dma_device_type": 2 00:12:25.138 } 00:12:25.138 ], 00:12:25.138 "driver_specific": { 00:12:25.138 "raid": { 00:12:25.138 "uuid": "cd0dcc49-9ed8-4fc4-91d1-6b4408258bca", 00:12:25.138 "strip_size_kb": 64, 00:12:25.138 "state": "online", 00:12:25.138 "raid_level": "concat", 00:12:25.138 "superblock": true, 00:12:25.138 "num_base_bdevs": 4, 00:12:25.138 "num_base_bdevs_discovered": 4, 00:12:25.138 "num_base_bdevs_operational": 4, 00:12:25.138 "base_bdevs_list": [ 00:12:25.138 { 00:12:25.138 "name": "NewBaseBdev", 00:12:25.138 "uuid": "661effd1-2d62-4b4e-b63b-a6f703e8b255", 00:12:25.138 "is_configured": 
true, 00:12:25.138 "data_offset": 2048, 00:12:25.138 "data_size": 63488 00:12:25.138 }, 00:12:25.138 { 00:12:25.138 "name": "BaseBdev2", 00:12:25.138 "uuid": "604da54e-3409-4835-bce2-c1b34773abf3", 00:12:25.138 "is_configured": true, 00:12:25.138 "data_offset": 2048, 00:12:25.138 "data_size": 63488 00:12:25.138 }, 00:12:25.138 { 00:12:25.138 "name": "BaseBdev3", 00:12:25.138 "uuid": "12b6cefe-017e-4c53-9257-277ab29d88d4", 00:12:25.138 "is_configured": true, 00:12:25.138 "data_offset": 2048, 00:12:25.138 "data_size": 63488 00:12:25.138 }, 00:12:25.138 { 00:12:25.138 "name": "BaseBdev4", 00:12:25.138 "uuid": "b81a56be-3219-4d90-8a98-1fa7bd0d2a7e", 00:12:25.138 "is_configured": true, 00:12:25.138 "data_offset": 2048, 00:12:25.138 "data_size": 63488 00:12:25.138 } 00:12:25.138 ] 00:12:25.138 } 00:12:25.138 } 00:12:25.138 }' 00:12:25.138 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.138 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:25.138 BaseBdev2 00:12:25.138 BaseBdev3 00:12:25.138 BaseBdev4' 00:12:25.138 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.138 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.138 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.138 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:25.138 11:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.138 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:25.138 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.138 11:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.138 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.396 [2024-11-15 11:24:08.187396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:25.396 [2024-11-15 11:24:08.187442] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.396 [2024-11-15 11:24:08.187601] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.396 [2024-11-15 11:24:08.187694] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.396 [2024-11-15 11:24:08.187710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71952 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 71952 ']' 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 71952 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71952 00:12:25.396 killing process with pid 71952 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71952' 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 71952 00:12:25.396 [2024-11-15 11:24:08.224067] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.396 11:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 71952 00:12:25.654 [2024-11-15 11:24:08.561643] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:12:27.031 11:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:27.031 00:12:27.031 real 0m12.854s 00:12:27.031 user 0m21.244s 00:12:27.031 sys 0m1.870s 00:12:27.031 ************************************ 00:12:27.031 END TEST raid_state_function_test_sb 00:12:27.031 ************************************ 00:12:27.031 11:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:27.031 11:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.031 11:24:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:27.031 11:24:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:27.031 11:24:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:27.031 11:24:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.031 ************************************ 00:12:27.031 START TEST raid_superblock_test 00:12:27.031 ************************************ 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:27.031 11:24:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72635 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72635 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72635 ']' 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:27.031 11:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.031 [2024-11-15 11:24:09.824297] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:12:27.031 [2024-11-15 11:24:09.824583] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72635 ] 00:12:27.290 [2024-11-15 11:24:10.038964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.290 [2024-11-15 11:24:10.164754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.548 [2024-11-15 11:24:10.371946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.548 [2024-11-15 11:24:10.372022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:28.117 11:24:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.117 malloc1 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.117 [2024-11-15 11:24:10.825281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:28.117 [2024-11-15 11:24:10.825530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.117 [2024-11-15 11:24:10.825817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:28.117 [2024-11-15 11:24:10.825845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.117 [2024-11-15 11:24:10.828855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.117 [2024-11-15 11:24:10.828900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:28.117 pt1 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:28.117 11:24:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.117 malloc2 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.117 [2024-11-15 11:24:10.883020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:28.117 [2024-11-15 11:24:10.883100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.117 [2024-11-15 11:24:10.883137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:28.117 
[2024-11-15 11:24:10.883152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.117 [2024-11-15 11:24:10.886577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.117 [2024-11-15 11:24:10.886822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:28.117 pt2 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.117 malloc3 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:28.117 
11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.117 [2024-11-15 11:24:10.946950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:28.117 [2024-11-15 11:24:10.947032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.117 [2024-11-15 11:24:10.947065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:28.117 [2024-11-15 11:24:10.947080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.117 [2024-11-15 11:24:10.950513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.117 pt3 00:12:28.117 [2024-11-15 11:24:10.950725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:28.117 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:28.118 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:28.118 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:28.118 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:28.118 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:28.118 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:28.118 11:24:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:28.118 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.118 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.118 malloc4 00:12:28.118 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.118 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:28.118 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.118 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.118 [2024-11-15 11:24:11.006209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:28.118 [2024-11-15 11:24:11.006279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.118 [2024-11-15 11:24:11.006314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:28.118 [2024-11-15 11:24:11.006331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.118 [2024-11-15 11:24:11.009388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.118 [2024-11-15 11:24:11.009435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:28.118 pt4 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 
00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.118 [2024-11-15 11:24:11.014335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:28.118 [2024-11-15 11:24:11.017065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:28.118 [2024-11-15 11:24:11.017362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:28.118 [2024-11-15 11:24:11.017458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:28.118 [2024-11-15 11:24:11.017751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:28.118 [2024-11-15 11:24:11.017769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:28.118 [2024-11-15 11:24:11.018105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:28.118 [2024-11-15 11:24:11.018355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:28.118 [2024-11-15 11:24:11.018379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:28.118 [2024-11-15 11:24:11.018626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 
-- # local raid_level=concat 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.118 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.384 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.384 "name": "raid_bdev1", 00:12:28.384 "uuid": "5c7dd2d9-c0d5-442c-8a13-9cb51292d60c", 00:12:28.384 "strip_size_kb": 64, 00:12:28.384 "state": "online", 00:12:28.384 "raid_level": "concat", 00:12:28.384 "superblock": true, 00:12:28.384 "num_base_bdevs": 4, 00:12:28.384 "num_base_bdevs_discovered": 4, 00:12:28.384 "num_base_bdevs_operational": 4, 00:12:28.384 "base_bdevs_list": [ 00:12:28.384 { 00:12:28.384 "name": "pt1", 00:12:28.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.384 "is_configured": true, 00:12:28.384 "data_offset": 2048, 00:12:28.384 "data_size": 63488 00:12:28.384 }, 00:12:28.384 { 00:12:28.384 "name": "pt2", 00:12:28.384 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:12:28.384 "is_configured": true, 00:12:28.384 "data_offset": 2048, 00:12:28.384 "data_size": 63488 00:12:28.384 }, 00:12:28.384 { 00:12:28.384 "name": "pt3", 00:12:28.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.384 "is_configured": true, 00:12:28.384 "data_offset": 2048, 00:12:28.384 "data_size": 63488 00:12:28.384 }, 00:12:28.384 { 00:12:28.384 "name": "pt4", 00:12:28.384 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.384 "is_configured": true, 00:12:28.384 "data_offset": 2048, 00:12:28.384 "data_size": 63488 00:12:28.384 } 00:12:28.384 ] 00:12:28.384 }' 00:12:28.384 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.385 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.656 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:28.656 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:28.656 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:28.656 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:28.656 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:28.656 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:28.656 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.656 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.656 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.656 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:28.656 [2024-11-15 11:24:11.559270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:12:28.656 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:28.914 "name": "raid_bdev1", 00:12:28.914 "aliases": [ 00:12:28.914 "5c7dd2d9-c0d5-442c-8a13-9cb51292d60c" 00:12:28.914 ], 00:12:28.914 "product_name": "Raid Volume", 00:12:28.914 "block_size": 512, 00:12:28.914 "num_blocks": 253952, 00:12:28.914 "uuid": "5c7dd2d9-c0d5-442c-8a13-9cb51292d60c", 00:12:28.914 "assigned_rate_limits": { 00:12:28.914 "rw_ios_per_sec": 0, 00:12:28.914 "rw_mbytes_per_sec": 0, 00:12:28.914 "r_mbytes_per_sec": 0, 00:12:28.914 "w_mbytes_per_sec": 0 00:12:28.914 }, 00:12:28.914 "claimed": false, 00:12:28.914 "zoned": false, 00:12:28.914 "supported_io_types": { 00:12:28.914 "read": true, 00:12:28.914 "write": true, 00:12:28.914 "unmap": true, 00:12:28.914 "flush": true, 00:12:28.914 "reset": true, 00:12:28.914 "nvme_admin": false, 00:12:28.914 "nvme_io": false, 00:12:28.914 "nvme_io_md": false, 00:12:28.914 "write_zeroes": true, 00:12:28.914 "zcopy": false, 00:12:28.914 "get_zone_info": false, 00:12:28.914 "zone_management": false, 00:12:28.914 "zone_append": false, 00:12:28.914 "compare": false, 00:12:28.914 "compare_and_write": false, 00:12:28.914 "abort": false, 00:12:28.914 "seek_hole": false, 00:12:28.914 "seek_data": false, 00:12:28.914 "copy": false, 00:12:28.914 "nvme_iov_md": false 00:12:28.914 }, 00:12:28.914 "memory_domains": [ 00:12:28.914 { 00:12:28.914 "dma_device_id": "system", 00:12:28.914 "dma_device_type": 1 00:12:28.914 }, 00:12:28.914 { 00:12:28.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.914 "dma_device_type": 2 00:12:28.914 }, 00:12:28.914 { 00:12:28.914 "dma_device_id": "system", 00:12:28.914 "dma_device_type": 1 00:12:28.914 }, 00:12:28.914 { 00:12:28.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.914 "dma_device_type": 2 00:12:28.914 }, 00:12:28.914 { 00:12:28.914 
"dma_device_id": "system", 00:12:28.914 "dma_device_type": 1 00:12:28.914 }, 00:12:28.914 { 00:12:28.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.914 "dma_device_type": 2 00:12:28.914 }, 00:12:28.914 { 00:12:28.914 "dma_device_id": "system", 00:12:28.914 "dma_device_type": 1 00:12:28.914 }, 00:12:28.914 { 00:12:28.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.914 "dma_device_type": 2 00:12:28.914 } 00:12:28.914 ], 00:12:28.914 "driver_specific": { 00:12:28.914 "raid": { 00:12:28.914 "uuid": "5c7dd2d9-c0d5-442c-8a13-9cb51292d60c", 00:12:28.914 "strip_size_kb": 64, 00:12:28.914 "state": "online", 00:12:28.914 "raid_level": "concat", 00:12:28.914 "superblock": true, 00:12:28.914 "num_base_bdevs": 4, 00:12:28.914 "num_base_bdevs_discovered": 4, 00:12:28.914 "num_base_bdevs_operational": 4, 00:12:28.914 "base_bdevs_list": [ 00:12:28.914 { 00:12:28.914 "name": "pt1", 00:12:28.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.914 "is_configured": true, 00:12:28.914 "data_offset": 2048, 00:12:28.914 "data_size": 63488 00:12:28.914 }, 00:12:28.914 { 00:12:28.914 "name": "pt2", 00:12:28.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.914 "is_configured": true, 00:12:28.914 "data_offset": 2048, 00:12:28.914 "data_size": 63488 00:12:28.914 }, 00:12:28.914 { 00:12:28.914 "name": "pt3", 00:12:28.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.914 "is_configured": true, 00:12:28.914 "data_offset": 2048, 00:12:28.914 "data_size": 63488 00:12:28.914 }, 00:12:28.914 { 00:12:28.914 "name": "pt4", 00:12:28.914 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.914 "is_configured": true, 00:12:28.914 "data_offset": 2048, 00:12:28.914 "data_size": 63488 00:12:28.914 } 00:12:28.914 ] 00:12:28.914 } 00:12:28.914 } 00:12:28.914 }' 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.914 11:24:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:28.914 pt2 00:12:28.914 pt3 00:12:28.914 pt4' 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.914 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.172 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.172 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.172 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.172 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:29.172 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.172 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.172 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.172 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.172 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.172 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:12:29.172 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:29.173 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.173 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:29.173 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.173 [2024-11-15 11:24:11.947229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.173 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.173 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5c7dd2d9-c0d5-442c-8a13-9cb51292d60c 00:12:29.173 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5c7dd2d9-c0d5-442c-8a13-9cb51292d60c ']' 00:12:29.173 11:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.173 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.173 11:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.173 [2024-11-15 11:24:11.994875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.173 [2024-11-15 11:24:11.995062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.173 [2024-11-15 11:24:11.995215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.173 [2024-11-15 11:24:11.995314] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.173 [2024-11-15 11:24:11.995340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:29.173 11:24:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:29.173 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat 
-b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.432 [2024-11-15 11:24:12.154950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:29.432 [2024-11-15 11:24:12.157828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:29.432 [2024-11-15 11:24:12.157893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:29.432 [2024-11-15 11:24:12.157949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:29.432 [2024-11-15 11:24:12.158026] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:29.432 [2024-11-15 11:24:12.158157] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:29.432 [2024-11-15 11:24:12.158395] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:29.432 [2024-11-15 11:24:12.158535] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:29.432 [2024-11-15 11:24:12.158577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.432 [2024-11-15 11:24:12.158593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:29.432 request: 00:12:29.432 { 00:12:29.432 "name": "raid_bdev1", 00:12:29.432 "raid_level": "concat", 00:12:29.432 "base_bdevs": [ 00:12:29.432 "malloc1", 00:12:29.432 "malloc2", 00:12:29.432 "malloc3", 00:12:29.432 "malloc4" 00:12:29.432 ], 00:12:29.432 "strip_size_kb": 64, 00:12:29.432 "superblock": false, 00:12:29.432 "method": "bdev_raid_create", 00:12:29.432 "req_id": 1 00:12:29.432 } 00:12:29.432 Got JSON-RPC error response 00:12:29.432 response: 00:12:29.432 { 00:12:29.432 "code": -17, 00:12:29.432 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:29.432 } 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.432 [2024-11-15 11:24:12.223066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:29.432 [2024-11-15 11:24:12.223288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.432 [2024-11-15 11:24:12.223365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:29.432 [2024-11-15 11:24:12.223548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.432 [2024-11-15 11:24:12.226733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.432 [2024-11-15 11:24:12.226799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:29.432 [2024-11-15 11:24:12.226900] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:29.432 [2024-11-15 11:24:12.226971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:29.432 pt1 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.432 11:24:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.432 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.432 "name": "raid_bdev1", 00:12:29.432 "uuid": "5c7dd2d9-c0d5-442c-8a13-9cb51292d60c", 00:12:29.432 "strip_size_kb": 64, 00:12:29.432 "state": "configuring", 00:12:29.432 "raid_level": "concat", 00:12:29.432 "superblock": true, 00:12:29.432 "num_base_bdevs": 4, 00:12:29.432 "num_base_bdevs_discovered": 1, 00:12:29.432 "num_base_bdevs_operational": 4, 00:12:29.432 "base_bdevs_list": [ 00:12:29.432 { 00:12:29.432 "name": "pt1", 00:12:29.432 "uuid": "00000000-0000-0000-0000-000000000001", 
00:12:29.432 "is_configured": true, 00:12:29.432 "data_offset": 2048, 00:12:29.432 "data_size": 63488 00:12:29.432 }, 00:12:29.432 { 00:12:29.432 "name": null, 00:12:29.432 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.432 "is_configured": false, 00:12:29.432 "data_offset": 2048, 00:12:29.432 "data_size": 63488 00:12:29.432 }, 00:12:29.432 { 00:12:29.432 "name": null, 00:12:29.432 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.432 "is_configured": false, 00:12:29.432 "data_offset": 2048, 00:12:29.433 "data_size": 63488 00:12:29.433 }, 00:12:29.433 { 00:12:29.433 "name": null, 00:12:29.433 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:29.433 "is_configured": false, 00:12:29.433 "data_offset": 2048, 00:12:29.433 "data_size": 63488 00:12:29.433 } 00:12:29.433 ] 00:12:29.433 }' 00:12:29.433 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.433 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.001 [2024-11-15 11:24:12.743483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:30.001 [2024-11-15 11:24:12.743632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.001 [2024-11-15 11:24:12.743665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:30.001 [2024-11-15 11:24:12.743691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.001 [2024-11-15 
11:24:12.744356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.001 [2024-11-15 11:24:12.744395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:30.001 [2024-11-15 11:24:12.744507] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:30.001 [2024-11-15 11:24:12.744556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:30.001 pt2 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.001 [2024-11-15 11:24:12.751477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.001 "name": "raid_bdev1", 00:12:30.001 "uuid": "5c7dd2d9-c0d5-442c-8a13-9cb51292d60c", 00:12:30.001 "strip_size_kb": 64, 00:12:30.001 "state": "configuring", 00:12:30.001 "raid_level": "concat", 00:12:30.001 "superblock": true, 00:12:30.001 "num_base_bdevs": 4, 00:12:30.001 "num_base_bdevs_discovered": 1, 00:12:30.001 "num_base_bdevs_operational": 4, 00:12:30.001 "base_bdevs_list": [ 00:12:30.001 { 00:12:30.001 "name": "pt1", 00:12:30.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:30.001 "is_configured": true, 00:12:30.001 "data_offset": 2048, 00:12:30.001 "data_size": 63488 00:12:30.001 }, 00:12:30.001 { 00:12:30.001 "name": null, 00:12:30.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.001 "is_configured": false, 00:12:30.001 "data_offset": 0, 00:12:30.001 "data_size": 63488 00:12:30.001 }, 00:12:30.001 { 00:12:30.001 "name": null, 00:12:30.001 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.001 "is_configured": false, 00:12:30.001 "data_offset": 2048, 00:12:30.001 "data_size": 63488 00:12:30.001 }, 00:12:30.001 { 00:12:30.001 "name": null, 00:12:30.001 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:30.001 "is_configured": false, 00:12:30.001 "data_offset": 2048, 00:12:30.001 "data_size": 63488 00:12:30.001 } 00:12:30.001 ] 00:12:30.001 }' 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.001 11:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.567 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:30.567 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:30.567 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.568 [2024-11-15 11:24:13.263704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:30.568 [2024-11-15 11:24:13.263802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.568 [2024-11-15 11:24:13.263837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:30.568 [2024-11-15 11:24:13.263852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.568 [2024-11-15 11:24:13.264551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.568 [2024-11-15 11:24:13.264606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:30.568 [2024-11-15 11:24:13.264719] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:30.568 [2024-11-15 11:24:13.264753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:30.568 pt2 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.568 [2024-11-15 11:24:13.271608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:30.568 [2024-11-15 11:24:13.271680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.568 [2024-11-15 11:24:13.271716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:30.568 [2024-11-15 11:24:13.271730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.568 [2024-11-15 11:24:13.272222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.568 [2024-11-15 11:24:13.272267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:30.568 [2024-11-15 11:24:13.272351] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:30.568 [2024-11-15 11:24:13.272389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:30.568 pt3 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # 
rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.568 [2024-11-15 11:24:13.279598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:30.568 [2024-11-15 11:24:13.279680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.568 [2024-11-15 11:24:13.279705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:30.568 [2024-11-15 11:24:13.279718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.568 [2024-11-15 11:24:13.280217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.568 [2024-11-15 11:24:13.280259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:30.568 [2024-11-15 11:24:13.280342] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:30.568 [2024-11-15 11:24:13.280371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:30.568 [2024-11-15 11:24:13.280581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:30.568 [2024-11-15 11:24:13.280605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:30.568 [2024-11-15 11:24:13.280921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:30.568 [2024-11-15 11:24:13.281133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:30.568 [2024-11-15 11:24:13.281163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:30.568 [2024-11-15 11:24:13.281369] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.568 pt4 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.568 "name": "raid_bdev1", 00:12:30.568 "uuid": "5c7dd2d9-c0d5-442c-8a13-9cb51292d60c", 00:12:30.568 "strip_size_kb": 64, 00:12:30.568 "state": "online", 00:12:30.568 "raid_level": "concat", 00:12:30.568 "superblock": true, 00:12:30.568 "num_base_bdevs": 4, 00:12:30.568 "num_base_bdevs_discovered": 4, 00:12:30.568 "num_base_bdevs_operational": 4, 00:12:30.568 "base_bdevs_list": [ 00:12:30.568 { 00:12:30.568 "name": "pt1", 00:12:30.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:30.568 "is_configured": true, 00:12:30.568 "data_offset": 2048, 00:12:30.568 "data_size": 63488 00:12:30.568 }, 00:12:30.568 { 00:12:30.568 "name": "pt2", 00:12:30.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.568 "is_configured": true, 00:12:30.568 "data_offset": 2048, 00:12:30.568 "data_size": 63488 00:12:30.568 }, 00:12:30.568 { 00:12:30.568 "name": "pt3", 00:12:30.568 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.568 "is_configured": true, 00:12:30.568 "data_offset": 2048, 00:12:30.568 "data_size": 63488 00:12:30.568 }, 00:12:30.568 { 00:12:30.568 "name": "pt4", 00:12:30.568 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.568 "is_configured": true, 00:12:30.568 "data_offset": 2048, 00:12:30.568 "data_size": 63488 00:12:30.568 } 00:12:30.568 ] 00:12:30.568 }' 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.568 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.133 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:31.133 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:31.133 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:12:31.133 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:31.133 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:31.133 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:31.133 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:31.133 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:31.133 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.133 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.133 [2024-11-15 11:24:13.812375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.133 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.133 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:31.133 "name": "raid_bdev1", 00:12:31.133 "aliases": [ 00:12:31.133 "5c7dd2d9-c0d5-442c-8a13-9cb51292d60c" 00:12:31.133 ], 00:12:31.133 "product_name": "Raid Volume", 00:12:31.133 "block_size": 512, 00:12:31.133 "num_blocks": 253952, 00:12:31.133 "uuid": "5c7dd2d9-c0d5-442c-8a13-9cb51292d60c", 00:12:31.133 "assigned_rate_limits": { 00:12:31.133 "rw_ios_per_sec": 0, 00:12:31.133 "rw_mbytes_per_sec": 0, 00:12:31.133 "r_mbytes_per_sec": 0, 00:12:31.134 "w_mbytes_per_sec": 0 00:12:31.134 }, 00:12:31.134 "claimed": false, 00:12:31.134 "zoned": false, 00:12:31.134 "supported_io_types": { 00:12:31.134 "read": true, 00:12:31.134 "write": true, 00:12:31.134 "unmap": true, 00:12:31.134 "flush": true, 00:12:31.134 "reset": true, 00:12:31.134 "nvme_admin": false, 00:12:31.134 "nvme_io": false, 00:12:31.134 "nvme_io_md": false, 00:12:31.134 "write_zeroes": true, 00:12:31.134 "zcopy": false, 00:12:31.134 "get_zone_info": false, 
00:12:31.134 "zone_management": false, 00:12:31.134 "zone_append": false, 00:12:31.134 "compare": false, 00:12:31.134 "compare_and_write": false, 00:12:31.134 "abort": false, 00:12:31.134 "seek_hole": false, 00:12:31.134 "seek_data": false, 00:12:31.134 "copy": false, 00:12:31.134 "nvme_iov_md": false 00:12:31.134 }, 00:12:31.134 "memory_domains": [ 00:12:31.134 { 00:12:31.134 "dma_device_id": "system", 00:12:31.134 "dma_device_type": 1 00:12:31.134 }, 00:12:31.134 { 00:12:31.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.134 "dma_device_type": 2 00:12:31.134 }, 00:12:31.134 { 00:12:31.134 "dma_device_id": "system", 00:12:31.134 "dma_device_type": 1 00:12:31.134 }, 00:12:31.134 { 00:12:31.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.134 "dma_device_type": 2 00:12:31.134 }, 00:12:31.134 { 00:12:31.134 "dma_device_id": "system", 00:12:31.134 "dma_device_type": 1 00:12:31.134 }, 00:12:31.134 { 00:12:31.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.134 "dma_device_type": 2 00:12:31.134 }, 00:12:31.134 { 00:12:31.134 "dma_device_id": "system", 00:12:31.134 "dma_device_type": 1 00:12:31.134 }, 00:12:31.134 { 00:12:31.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.134 "dma_device_type": 2 00:12:31.134 } 00:12:31.134 ], 00:12:31.134 "driver_specific": { 00:12:31.134 "raid": { 00:12:31.134 "uuid": "5c7dd2d9-c0d5-442c-8a13-9cb51292d60c", 00:12:31.134 "strip_size_kb": 64, 00:12:31.134 "state": "online", 00:12:31.134 "raid_level": "concat", 00:12:31.134 "superblock": true, 00:12:31.134 "num_base_bdevs": 4, 00:12:31.134 "num_base_bdevs_discovered": 4, 00:12:31.134 "num_base_bdevs_operational": 4, 00:12:31.134 "base_bdevs_list": [ 00:12:31.134 { 00:12:31.134 "name": "pt1", 00:12:31.134 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:31.134 "is_configured": true, 00:12:31.134 "data_offset": 2048, 00:12:31.134 "data_size": 63488 00:12:31.134 }, 00:12:31.134 { 00:12:31.134 "name": "pt2", 00:12:31.134 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:12:31.134 "is_configured": true, 00:12:31.134 "data_offset": 2048, 00:12:31.134 "data_size": 63488 00:12:31.134 }, 00:12:31.134 { 00:12:31.134 "name": "pt3", 00:12:31.134 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.134 "is_configured": true, 00:12:31.134 "data_offset": 2048, 00:12:31.134 "data_size": 63488 00:12:31.134 }, 00:12:31.134 { 00:12:31.134 "name": "pt4", 00:12:31.134 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:31.134 "is_configured": true, 00:12:31.134 "data_offset": 2048, 00:12:31.134 "data_size": 63488 00:12:31.134 } 00:12:31.134 ] 00:12:31.134 } 00:12:31.134 } 00:12:31.134 }' 00:12:31.134 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:31.134 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:31.134 pt2 00:12:31.134 pt3 00:12:31.134 pt4' 00:12:31.134 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.134 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:31.134 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.134 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:31.134 11:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.134 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.134 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.134 11:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.134 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.393 11:24:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:31.393 [2024-11-15 11:24:14.172293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5c7dd2d9-c0d5-442c-8a13-9cb51292d60c '!=' 5c7dd2d9-c0d5-442c-8a13-9cb51292d60c ']' 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 
00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72635 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72635 ']' 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72635 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72635 00:12:31.393 killing process with pid 72635 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72635' 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72635 00:12:31.393 [2024-11-15 11:24:14.257807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.393 11:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72635 00:12:31.393 [2024-11-15 11:24:14.257903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.393 [2024-11-15 11:24:14.258003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.393 [2024-11-15 11:24:14.258019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:31.651 [2024-11-15 11:24:14.592866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:33.039 11:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:33.039 
00:12:33.039 real 0m6.003s 00:12:33.039 user 0m8.925s 00:12:33.039 sys 0m0.965s 00:12:33.039 11:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:33.039 ************************************ 00:12:33.039 END TEST raid_superblock_test 00:12:33.039 ************************************ 00:12:33.039 11:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.039 11:24:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:33.039 11:24:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:33.039 11:24:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:33.040 11:24:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:33.040 ************************************ 00:12:33.040 START TEST raid_read_error_test 00:12:33.040 ************************************ 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 
00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:33.040 11:24:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mEZ6vdsUkU 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72900 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72900 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 72900 ']' 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:33.040 11:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.040 [2024-11-15 11:24:15.873979] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:12:33.040 [2024-11-15 11:24:15.874480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72900 ] 00:12:33.298 [2024-11-15 11:24:16.062220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.298 [2024-11-15 11:24:16.210467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.557 [2024-11-15 11:24:16.443778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.557 [2024-11-15 11:24:16.443871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.125 BaseBdev1_malloc 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.125 true 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.125 [2024-11-15 11:24:16.910826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:34.125 [2024-11-15 11:24:16.911091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.125 [2024-11-15 11:24:16.911170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:34.125 [2024-11-15 11:24:16.911219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.125 [2024-11-15 11:24:16.914217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.125 [2024-11-15 11:24:16.914265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:34.125 BaseBdev1 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.125 BaseBdev2_malloc 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.125 true 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.125 [2024-11-15 11:24:16.976112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:34.125 [2024-11-15 11:24:16.976220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.125 [2024-11-15 11:24:16.976249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:34.125 [2024-11-15 11:24:16.976265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.125 [2024-11-15 11:24:16.979155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.125 [2024-11-15 11:24:16.979248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:34.125 BaseBdev2 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.125 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.125 BaseBdev3_malloc 00:12:34.125 11:24:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.125 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:34.125 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.125 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.125 true 00:12:34.125 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.126 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:34.126 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.126 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.126 [2024-11-15 11:24:17.040819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:34.126 [2024-11-15 11:24:17.040905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.126 [2024-11-15 11:24:17.040932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:34.126 [2024-11-15 11:24:17.040949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.126 [2024-11-15 11:24:17.043897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.126 [2024-11-15 11:24:17.043962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:34.126 BaseBdev3 00:12:34.126 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.126 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:34.126 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:34.126 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.126 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.384 BaseBdev4_malloc 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.384 true 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.384 [2024-11-15 11:24:17.098446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:34.384 [2024-11-15 11:24:17.098529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.384 [2024-11-15 11:24:17.098557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:34.384 [2024-11-15 11:24:17.098573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.384 [2024-11-15 11:24:17.101379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.384 [2024-11-15 11:24:17.101428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:34.384 BaseBdev4 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.384 [2024-11-15 11:24:17.106509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.384 [2024-11-15 11:24:17.108996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.384 [2024-11-15 11:24:17.109338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.384 [2024-11-15 11:24:17.109525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:34.384 [2024-11-15 11:24:17.109892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:34.384 [2024-11-15 11:24:17.109916] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:34.384 [2024-11-15 11:24:17.110293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:34.384 [2024-11-15 11:24:17.110590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:34.384 [2024-11-15 11:24:17.110619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:34.384 [2024-11-15 11:24:17.110873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:34.384 11:24:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.384 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.385 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.385 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.385 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.385 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.385 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.385 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.385 "name": "raid_bdev1", 00:12:34.385 "uuid": "f6953604-3ad4-4924-89d3-54d5d609eff8", 00:12:34.385 "strip_size_kb": 64, 00:12:34.385 "state": "online", 00:12:34.385 "raid_level": "concat", 00:12:34.385 "superblock": true, 00:12:34.385 "num_base_bdevs": 4, 00:12:34.385 "num_base_bdevs_discovered": 4, 00:12:34.385 "num_base_bdevs_operational": 4, 00:12:34.385 "base_bdevs_list": [ 
00:12:34.385 { 00:12:34.385 "name": "BaseBdev1", 00:12:34.385 "uuid": "b60f9128-d3d3-5083-adcc-a045d86586a2", 00:12:34.385 "is_configured": true, 00:12:34.385 "data_offset": 2048, 00:12:34.385 "data_size": 63488 00:12:34.385 }, 00:12:34.385 { 00:12:34.385 "name": "BaseBdev2", 00:12:34.385 "uuid": "807e9dfc-b511-5322-9059-8f444e7d5610", 00:12:34.385 "is_configured": true, 00:12:34.385 "data_offset": 2048, 00:12:34.385 "data_size": 63488 00:12:34.385 }, 00:12:34.385 { 00:12:34.385 "name": "BaseBdev3", 00:12:34.385 "uuid": "c5a264ad-fac3-5837-b69e-2f851b8aed4d", 00:12:34.385 "is_configured": true, 00:12:34.385 "data_offset": 2048, 00:12:34.385 "data_size": 63488 00:12:34.385 }, 00:12:34.385 { 00:12:34.385 "name": "BaseBdev4", 00:12:34.385 "uuid": "2f82b827-00f0-5b79-96c6-dec0350415d2", 00:12:34.385 "is_configured": true, 00:12:34.385 "data_offset": 2048, 00:12:34.385 "data_size": 63488 00:12:34.385 } 00:12:34.385 ] 00:12:34.385 }' 00:12:34.385 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.385 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.951 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:34.951 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:34.951 [2024-11-15 11:24:17.732333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.888 11:24:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.888 11:24:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.888 "name": "raid_bdev1", 00:12:35.888 "uuid": "f6953604-3ad4-4924-89d3-54d5d609eff8", 00:12:35.888 "strip_size_kb": 64, 00:12:35.888 "state": "online", 00:12:35.888 "raid_level": "concat", 00:12:35.888 "superblock": true, 00:12:35.888 "num_base_bdevs": 4, 00:12:35.888 "num_base_bdevs_discovered": 4, 00:12:35.888 "num_base_bdevs_operational": 4, 00:12:35.888 "base_bdevs_list": [ 00:12:35.888 { 00:12:35.888 "name": "BaseBdev1", 00:12:35.888 "uuid": "b60f9128-d3d3-5083-adcc-a045d86586a2", 00:12:35.888 "is_configured": true, 00:12:35.888 "data_offset": 2048, 00:12:35.888 "data_size": 63488 00:12:35.888 }, 00:12:35.888 { 00:12:35.888 "name": "BaseBdev2", 00:12:35.888 "uuid": "807e9dfc-b511-5322-9059-8f444e7d5610", 00:12:35.888 "is_configured": true, 00:12:35.888 "data_offset": 2048, 00:12:35.888 "data_size": 63488 00:12:35.888 }, 00:12:35.888 { 00:12:35.888 "name": "BaseBdev3", 00:12:35.888 "uuid": "c5a264ad-fac3-5837-b69e-2f851b8aed4d", 00:12:35.888 "is_configured": true, 00:12:35.888 "data_offset": 2048, 00:12:35.888 "data_size": 63488 00:12:35.888 }, 00:12:35.888 { 00:12:35.888 "name": "BaseBdev4", 00:12:35.888 "uuid": "2f82b827-00f0-5b79-96c6-dec0350415d2", 00:12:35.888 "is_configured": true, 00:12:35.888 "data_offset": 2048, 00:12:35.888 "data_size": 63488 00:12:35.888 } 00:12:35.888 ] 00:12:35.888 }' 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.888 11:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.456 [2024-11-15 11:24:19.108121] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:36.456 [2024-11-15 11:24:19.108179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:36.456 [2024-11-15 11:24:19.111674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.456 [2024-11-15 11:24:19.111750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.456 [2024-11-15 11:24:19.111810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.456 [2024-11-15 11:24:19.111831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:36.456 { 00:12:36.456 "results": [ 00:12:36.456 { 00:12:36.456 "job": "raid_bdev1", 00:12:36.456 "core_mask": "0x1", 00:12:36.456 "workload": "randrw", 00:12:36.456 "percentage": 50, 00:12:36.456 "status": "finished", 00:12:36.456 "queue_depth": 1, 00:12:36.456 "io_size": 131072, 00:12:36.456 "runtime": 1.373228, 00:12:36.456 "iops": 9889.836210738493, 00:12:36.456 "mibps": 1236.2295263423116, 00:12:36.456 "io_failed": 1, 00:12:36.456 "io_timeout": 0, 00:12:36.456 "avg_latency_us": 141.02395416393355, 00:12:36.456 "min_latency_us": 37.70181818181818, 00:12:36.456 "max_latency_us": 1966.08 00:12:36.456 } 00:12:36.456 ], 00:12:36.456 "core_count": 1 00:12:36.456 } 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72900 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 72900 ']' 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 72900 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72900 00:12:36.456 killing process with pid 72900 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72900' 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 72900 00:12:36.456 [2024-11-15 11:24:19.149200] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:36.456 11:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 72900 00:12:36.714 [2024-11-15 11:24:19.425462] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.651 11:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:37.651 11:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mEZ6vdsUkU 00:12:37.651 11:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:37.651 11:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:37.651 11:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:37.651 ************************************ 00:12:37.651 END TEST raid_read_error_test 00:12:37.651 ************************************ 00:12:37.651 11:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:37.651 11:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:37.651 11:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:37.651 00:12:37.651 real 0m4.815s 
00:12:37.651 user 0m5.814s 00:12:37.651 sys 0m0.682s 00:12:37.651 11:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:37.651 11:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.910 11:24:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:37.910 11:24:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:37.910 11:24:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:37.910 11:24:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.910 ************************************ 00:12:37.910 START TEST raid_write_error_test 00:12:37.910 ************************************ 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:37.910 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GHI00DVOdl 00:12:37.911 11:24:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73051 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73051 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73051 ']' 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:37.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:37.911 11:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.911 [2024-11-15 11:24:20.737359] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:12:37.911 [2024-11-15 11:24:20.738418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73051 ] 00:12:38.169 [2024-11-15 11:24:20.929862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.169 [2024-11-15 11:24:21.058080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.427 [2024-11-15 11:24:21.278902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.427 [2024-11-15 11:24:21.279129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.994 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:38.994 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:38.994 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.994 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.994 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.994 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.994 BaseBdev1_malloc 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.995 true 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.995 [2024-11-15 11:24:21.790379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:38.995 [2024-11-15 11:24:21.790682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.995 [2024-11-15 11:24:21.790753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:38.995 [2024-11-15 11:24:21.790773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.995 [2024-11-15 11:24:21.794020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.995 [2024-11-15 11:24:21.794300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.995 BaseBdev1 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.995 BaseBdev2_malloc 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:38.995 11:24:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.995 true 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.995 [2024-11-15 11:24:21.860515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:38.995 [2024-11-15 11:24:21.860610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.995 [2024-11-15 11:24:21.860635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:38.995 [2024-11-15 11:24:21.860651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.995 [2024-11-15 11:24:21.863649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.995 [2024-11-15 11:24:21.863711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:38.995 BaseBdev2 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:38.995 BaseBdev3_malloc 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.995 true 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.995 [2024-11-15 11:24:21.930716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:38.995 [2024-11-15 11:24:21.930810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.995 [2024-11-15 11:24:21.930837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:38.995 [2024-11-15 11:24:21.930854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.995 [2024-11-15 11:24:21.933798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.995 [2024-11-15 11:24:21.933862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:38.995 BaseBdev3 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.995 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.254 BaseBdev4_malloc 00:12:39.254 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.254 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:39.254 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.254 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.254 true 00:12:39.254 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.254 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:39.254 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.254 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.254 [2024-11-15 11:24:21.992667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:39.254 [2024-11-15 11:24:21.992763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.254 [2024-11-15 11:24:21.992790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:39.254 [2024-11-15 11:24:21.992807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.254 [2024-11-15 11:24:21.995833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.254 [2024-11-15 11:24:21.995898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:39.254 BaseBdev4 
00:12:39.254 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.254 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:39.254 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.254 11:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.254 [2024-11-15 11:24:22.000816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.254 [2024-11-15 11:24:22.003752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.254 [2024-11-15 11:24:22.004018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:39.254 [2024-11-15 11:24:22.004166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:39.254 [2024-11-15 11:24:22.004616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:39.254 [2024-11-15 11:24:22.004755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:39.254 [2024-11-15 11:24:22.005105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:39.254 [2024-11-15 11:24:22.005507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:39.254 [2024-11-15 11:24:22.005650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:39.254 [2024-11-15 11:24:22.006144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.254 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.254 "name": "raid_bdev1", 00:12:39.254 "uuid": "00a8e8eb-13db-4bf7-8fb2-aecd04f73d48", 00:12:39.254 "strip_size_kb": 64, 00:12:39.254 "state": "online", 00:12:39.255 "raid_level": "concat", 00:12:39.255 "superblock": true, 00:12:39.255 "num_base_bdevs": 4, 00:12:39.255 "num_base_bdevs_discovered": 4, 00:12:39.255 
"num_base_bdevs_operational": 4, 00:12:39.255 "base_bdevs_list": [ 00:12:39.255 { 00:12:39.255 "name": "BaseBdev1", 00:12:39.255 "uuid": "bc939b4b-c829-5beb-9aaa-4ab699a9a0f8", 00:12:39.255 "is_configured": true, 00:12:39.255 "data_offset": 2048, 00:12:39.255 "data_size": 63488 00:12:39.255 }, 00:12:39.255 { 00:12:39.255 "name": "BaseBdev2", 00:12:39.255 "uuid": "fe01f73b-8b59-5bcf-9653-787c6b2e33c7", 00:12:39.255 "is_configured": true, 00:12:39.255 "data_offset": 2048, 00:12:39.255 "data_size": 63488 00:12:39.255 }, 00:12:39.255 { 00:12:39.255 "name": "BaseBdev3", 00:12:39.255 "uuid": "5c014d9e-865b-5fa4-a711-837b2b69df23", 00:12:39.255 "is_configured": true, 00:12:39.255 "data_offset": 2048, 00:12:39.255 "data_size": 63488 00:12:39.255 }, 00:12:39.255 { 00:12:39.255 "name": "BaseBdev4", 00:12:39.255 "uuid": "a931891e-7703-5081-aaf2-722748c0b5d0", 00:12:39.255 "is_configured": true, 00:12:39.255 "data_offset": 2048, 00:12:39.255 "data_size": 63488 00:12:39.255 } 00:12:39.255 ] 00:12:39.255 }' 00:12:39.255 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.255 11:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.822 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:39.822 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:39.822 [2024-11-15 11:24:22.607740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 11:24:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.762 "name": "raid_bdev1", 00:12:40.762 "uuid": "00a8e8eb-13db-4bf7-8fb2-aecd04f73d48", 00:12:40.762 "strip_size_kb": 64, 00:12:40.762 "state": "online", 00:12:40.762 "raid_level": "concat", 00:12:40.762 "superblock": true, 00:12:40.762 "num_base_bdevs": 4, 00:12:40.762 "num_base_bdevs_discovered": 4, 00:12:40.762 "num_base_bdevs_operational": 4, 00:12:40.762 "base_bdevs_list": [ 00:12:40.762 { 00:12:40.762 "name": "BaseBdev1", 00:12:40.762 "uuid": "bc939b4b-c829-5beb-9aaa-4ab699a9a0f8", 00:12:40.762 "is_configured": true, 00:12:40.762 "data_offset": 2048, 00:12:40.762 "data_size": 63488 00:12:40.762 }, 00:12:40.762 { 00:12:40.762 "name": "BaseBdev2", 00:12:40.762 "uuid": "fe01f73b-8b59-5bcf-9653-787c6b2e33c7", 00:12:40.762 "is_configured": true, 00:12:40.762 "data_offset": 2048, 00:12:40.762 "data_size": 63488 00:12:40.762 }, 00:12:40.762 { 00:12:40.762 "name": "BaseBdev3", 00:12:40.762 "uuid": "5c014d9e-865b-5fa4-a711-837b2b69df23", 00:12:40.762 "is_configured": true, 00:12:40.762 "data_offset": 2048, 00:12:40.762 "data_size": 63488 00:12:40.762 }, 00:12:40.762 { 00:12:40.762 "name": "BaseBdev4", 00:12:40.762 "uuid": "a931891e-7703-5081-aaf2-722748c0b5d0", 00:12:40.762 "is_configured": true, 00:12:40.762 "data_offset": 2048, 00:12:40.762 "data_size": 63488 00:12:40.762 } 00:12:40.762 ] 00:12:40.762 }' 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.762 11:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:41.339 [2024-11-15 11:24:24.087908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.339 [2024-11-15 11:24:24.088104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.339 [2024-11-15 11:24:24.091758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.339 [2024-11-15 11:24:24.091998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.339 [2024-11-15 11:24:24.092104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.339 [2024-11-15 11:24:24.092359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:41.339 { 00:12:41.339 "results": [ 00:12:41.339 { 00:12:41.339 "job": "raid_bdev1", 00:12:41.339 "core_mask": "0x1", 00:12:41.339 "workload": "randrw", 00:12:41.339 "percentage": 50, 00:12:41.339 "status": "finished", 00:12:41.339 "queue_depth": 1, 00:12:41.339 "io_size": 131072, 00:12:41.339 "runtime": 1.478011, 00:12:41.339 "iops": 9968.802667909778, 00:12:41.339 "mibps": 1246.1003334887223, 00:12:41.339 "io_failed": 1, 00:12:41.339 "io_timeout": 0, 00:12:41.339 "avg_latency_us": 140.24349199494094, 00:12:41.339 "min_latency_us": 37.93454545454546, 00:12:41.339 "max_latency_us": 1980.9745454545455 00:12:41.339 } 00:12:41.339 ], 00:12:41.339 "core_count": 1 00:12:41.339 } 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73051 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73051 ']' 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73051 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test --
common/autotest_common.sh@957 -- # uname 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73051 00:12:41.339 killing process with pid 73051 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73051' 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73051 00:12:41.339 [2024-11-15 11:24:24.129679] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:41.339 11:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73051 00:12:41.598 [2024-11-15 11:24:24.406821] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.974 11:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GHI00DVOdl 00:12:42.974 11:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:42.974 11:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:42.974 11:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:12:42.974 11:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:42.974 11:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:42.974 11:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:42.974 11:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:12:42.974 00:12:42.974 real 0m4.920s 00:12:42.974 user 0m5.975s 
00:12:42.974 sys 0m0.697s 00:12:42.974 11:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:42.974 11:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.974 ************************************ 00:12:42.974 END TEST raid_write_error_test 00:12:42.974 ************************************ 00:12:42.974 11:24:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:42.974 11:24:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:42.974 11:24:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:42.974 11:24:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:42.974 11:24:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.974 ************************************ 00:12:42.974 START TEST raid_state_function_test 00:12:42.974 ************************************ 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.974 
11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:42.974 11:24:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:42.974 Process raid pid: 73195 00:12:42.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.974 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73195 00:12:42.975 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73195' 00:12:42.975 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73195 00:12:42.975 11:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:42.975 11:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73195 ']' 00:12:42.975 11:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.975 11:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:42.975 11:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.975 11:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:42.975 11:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.975 [2024-11-15 11:24:25.706593] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:12:42.975 [2024-11-15 11:24:25.707055] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.975 [2024-11-15 11:24:25.896154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.233 [2024-11-15 11:24:26.042207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.491 [2024-11-15 11:24:26.267774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.491 [2024-11-15 11:24:26.267998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.749 11:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:43.749 11:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:43.749 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:43.749 11:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.749 11:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.009 [2024-11-15 11:24:26.701384] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:44.009 [2024-11-15 11:24:26.701472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:44.009 [2024-11-15 11:24:26.701491] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.009 [2024-11-15 11:24:26.701509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.009 [2024-11-15 11:24:26.701535] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:44.009 [2024-11-15 11:24:26.701565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.009 [2024-11-15 11:24:26.701574] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:44.009 [2024-11-15 11:24:26.701603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.009 "name": "Existed_Raid", 00:12:44.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.009 "strip_size_kb": 0, 00:12:44.009 "state": "configuring", 00:12:44.009 "raid_level": "raid1", 00:12:44.009 "superblock": false, 00:12:44.009 "num_base_bdevs": 4, 00:12:44.009 "num_base_bdevs_discovered": 0, 00:12:44.009 "num_base_bdevs_operational": 4, 00:12:44.009 "base_bdevs_list": [ 00:12:44.009 { 00:12:44.009 "name": "BaseBdev1", 00:12:44.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.009 "is_configured": false, 00:12:44.009 "data_offset": 0, 00:12:44.009 "data_size": 0 00:12:44.009 }, 00:12:44.009 { 00:12:44.009 "name": "BaseBdev2", 00:12:44.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.009 "is_configured": false, 00:12:44.009 "data_offset": 0, 00:12:44.009 "data_size": 0 00:12:44.009 }, 00:12:44.009 { 00:12:44.009 "name": "BaseBdev3", 00:12:44.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.009 "is_configured": false, 00:12:44.009 "data_offset": 0, 00:12:44.009 "data_size": 0 00:12:44.009 }, 00:12:44.009 { 00:12:44.009 "name": "BaseBdev4", 00:12:44.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.009 "is_configured": false, 00:12:44.009 "data_offset": 0, 00:12:44.009 "data_size": 0 00:12:44.009 } 00:12:44.009 ] 00:12:44.009 }' 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.009 11:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.576 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:12:44.576 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.576 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.576 [2024-11-15 11:24:27.225480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.576 [2024-11-15 11:24:27.225526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:44.576 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.576 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:44.576 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.576 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.576 [2024-11-15 11:24:27.233467] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:44.576 [2024-11-15 11:24:27.233565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:44.576 [2024-11-15 11:24:27.233595] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.576 [2024-11-15 11:24:27.233611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.576 [2024-11-15 11:24:27.233620] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:44.576 [2024-11-15 11:24:27.233633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.576 [2024-11-15 11:24:27.233642] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:44.576 [2024-11-15 11:24:27.233656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:44.576 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.576 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:44.576 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.577 [2024-11-15 11:24:27.284272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.577 BaseBdev1 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.577 [ 00:12:44.577 { 00:12:44.577 "name": "BaseBdev1", 00:12:44.577 "aliases": [ 00:12:44.577 "79cfdec0-cf18-408a-b301-17ba3a493718" 00:12:44.577 ], 00:12:44.577 "product_name": "Malloc disk", 00:12:44.577 "block_size": 512, 00:12:44.577 "num_blocks": 65536, 00:12:44.577 "uuid": "79cfdec0-cf18-408a-b301-17ba3a493718", 00:12:44.577 "assigned_rate_limits": { 00:12:44.577 "rw_ios_per_sec": 0, 00:12:44.577 "rw_mbytes_per_sec": 0, 00:12:44.577 "r_mbytes_per_sec": 0, 00:12:44.577 "w_mbytes_per_sec": 0 00:12:44.577 }, 00:12:44.577 "claimed": true, 00:12:44.577 "claim_type": "exclusive_write", 00:12:44.577 "zoned": false, 00:12:44.577 "supported_io_types": { 00:12:44.577 "read": true, 00:12:44.577 "write": true, 00:12:44.577 "unmap": true, 00:12:44.577 "flush": true, 00:12:44.577 "reset": true, 00:12:44.577 "nvme_admin": false, 00:12:44.577 "nvme_io": false, 00:12:44.577 "nvme_io_md": false, 00:12:44.577 "write_zeroes": true, 00:12:44.577 "zcopy": true, 00:12:44.577 "get_zone_info": false, 00:12:44.577 "zone_management": false, 00:12:44.577 "zone_append": false, 00:12:44.577 "compare": false, 00:12:44.577 "compare_and_write": false, 00:12:44.577 "abort": true, 00:12:44.577 "seek_hole": false, 00:12:44.577 "seek_data": false, 00:12:44.577 "copy": true, 00:12:44.577 "nvme_iov_md": false 00:12:44.577 }, 00:12:44.577 "memory_domains": [ 00:12:44.577 { 00:12:44.577 "dma_device_id": "system", 00:12:44.577 "dma_device_type": 1 00:12:44.577 }, 00:12:44.577 { 00:12:44.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.577 "dma_device_type": 2 00:12:44.577 } 00:12:44.577 ], 00:12:44.577 "driver_specific": {} 00:12:44.577 } 00:12:44.577 ] 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.577 "name": "Existed_Raid", 00:12:44.577 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:44.577 "strip_size_kb": 0, 00:12:44.577 "state": "configuring", 00:12:44.577 "raid_level": "raid1", 00:12:44.577 "superblock": false, 00:12:44.577 "num_base_bdevs": 4, 00:12:44.577 "num_base_bdevs_discovered": 1, 00:12:44.577 "num_base_bdevs_operational": 4, 00:12:44.577 "base_bdevs_list": [ 00:12:44.577 { 00:12:44.577 "name": "BaseBdev1", 00:12:44.577 "uuid": "79cfdec0-cf18-408a-b301-17ba3a493718", 00:12:44.577 "is_configured": true, 00:12:44.577 "data_offset": 0, 00:12:44.577 "data_size": 65536 00:12:44.577 }, 00:12:44.577 { 00:12:44.577 "name": "BaseBdev2", 00:12:44.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.577 "is_configured": false, 00:12:44.577 "data_offset": 0, 00:12:44.577 "data_size": 0 00:12:44.577 }, 00:12:44.577 { 00:12:44.577 "name": "BaseBdev3", 00:12:44.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.577 "is_configured": false, 00:12:44.577 "data_offset": 0, 00:12:44.577 "data_size": 0 00:12:44.577 }, 00:12:44.577 { 00:12:44.577 "name": "BaseBdev4", 00:12:44.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.577 "is_configured": false, 00:12:44.577 "data_offset": 0, 00:12:44.577 "data_size": 0 00:12:44.577 } 00:12:44.577 ] 00:12:44.577 }' 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.577 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.145 [2024-11-15 11:24:27.824562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:45.145 [2024-11-15 11:24:27.824672] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.145 [2024-11-15 11:24:27.832621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.145 [2024-11-15 11:24:27.835372] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:45.145 [2024-11-15 11:24:27.835429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:45.145 [2024-11-15 11:24:27.835447] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:45.145 [2024-11-15 11:24:27.835465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:45.145 [2024-11-15 11:24:27.835475] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:45.145 [2024-11-15 11:24:27.835500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:45.145 11:24:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.145 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.145 "name": "Existed_Raid", 00:12:45.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.145 "strip_size_kb": 0, 00:12:45.145 "state": "configuring", 00:12:45.145 "raid_level": "raid1", 00:12:45.145 "superblock": false, 00:12:45.145 "num_base_bdevs": 4, 00:12:45.145 "num_base_bdevs_discovered": 1, 00:12:45.145 
"num_base_bdevs_operational": 4, 00:12:45.145 "base_bdevs_list": [ 00:12:45.145 { 00:12:45.145 "name": "BaseBdev1", 00:12:45.145 "uuid": "79cfdec0-cf18-408a-b301-17ba3a493718", 00:12:45.145 "is_configured": true, 00:12:45.145 "data_offset": 0, 00:12:45.145 "data_size": 65536 00:12:45.145 }, 00:12:45.145 { 00:12:45.145 "name": "BaseBdev2", 00:12:45.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.145 "is_configured": false, 00:12:45.146 "data_offset": 0, 00:12:45.146 "data_size": 0 00:12:45.146 }, 00:12:45.146 { 00:12:45.146 "name": "BaseBdev3", 00:12:45.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.146 "is_configured": false, 00:12:45.146 "data_offset": 0, 00:12:45.146 "data_size": 0 00:12:45.146 }, 00:12:45.146 { 00:12:45.146 "name": "BaseBdev4", 00:12:45.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.146 "is_configured": false, 00:12:45.146 "data_offset": 0, 00:12:45.146 "data_size": 0 00:12:45.146 } 00:12:45.146 ] 00:12:45.146 }' 00:12:45.146 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.146 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.714 [2024-11-15 11:24:28.399043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.714 BaseBdev2 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev2 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.714 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.714 [ 00:12:45.714 { 00:12:45.714 "name": "BaseBdev2", 00:12:45.714 "aliases": [ 00:12:45.714 "2b2828c2-5f71-4a15-b8bc-95a24ea76529" 00:12:45.714 ], 00:12:45.714 "product_name": "Malloc disk", 00:12:45.714 "block_size": 512, 00:12:45.714 "num_blocks": 65536, 00:12:45.714 "uuid": "2b2828c2-5f71-4a15-b8bc-95a24ea76529", 00:12:45.714 "assigned_rate_limits": { 00:12:45.714 "rw_ios_per_sec": 0, 00:12:45.714 "rw_mbytes_per_sec": 0, 00:12:45.714 "r_mbytes_per_sec": 0, 00:12:45.714 "w_mbytes_per_sec": 0 00:12:45.714 }, 00:12:45.714 "claimed": true, 00:12:45.714 "claim_type": "exclusive_write", 00:12:45.714 "zoned": false, 00:12:45.714 "supported_io_types": { 00:12:45.714 "read": true, 00:12:45.714 "write": true, 00:12:45.714 
"unmap": true, 00:12:45.714 "flush": true, 00:12:45.714 "reset": true, 00:12:45.714 "nvme_admin": false, 00:12:45.714 "nvme_io": false, 00:12:45.714 "nvme_io_md": false, 00:12:45.714 "write_zeroes": true, 00:12:45.714 "zcopy": true, 00:12:45.714 "get_zone_info": false, 00:12:45.714 "zone_management": false, 00:12:45.714 "zone_append": false, 00:12:45.714 "compare": false, 00:12:45.714 "compare_and_write": false, 00:12:45.714 "abort": true, 00:12:45.714 "seek_hole": false, 00:12:45.714 "seek_data": false, 00:12:45.714 "copy": true, 00:12:45.714 "nvme_iov_md": false 00:12:45.714 }, 00:12:45.714 "memory_domains": [ 00:12:45.714 { 00:12:45.714 "dma_device_id": "system", 00:12:45.714 "dma_device_type": 1 00:12:45.714 }, 00:12:45.714 { 00:12:45.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.715 "dma_device_type": 2 00:12:45.715 } 00:12:45.715 ], 00:12:45.715 "driver_specific": {} 00:12:45.715 } 00:12:45.715 ] 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.715 11:24:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.715 "name": "Existed_Raid", 00:12:45.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.715 "strip_size_kb": 0, 00:12:45.715 "state": "configuring", 00:12:45.715 "raid_level": "raid1", 00:12:45.715 "superblock": false, 00:12:45.715 "num_base_bdevs": 4, 00:12:45.715 "num_base_bdevs_discovered": 2, 00:12:45.715 "num_base_bdevs_operational": 4, 00:12:45.715 "base_bdevs_list": [ 00:12:45.715 { 00:12:45.715 "name": "BaseBdev1", 00:12:45.715 "uuid": "79cfdec0-cf18-408a-b301-17ba3a493718", 00:12:45.715 "is_configured": true, 00:12:45.715 "data_offset": 0, 00:12:45.715 "data_size": 65536 00:12:45.715 }, 00:12:45.715 { 00:12:45.715 "name": "BaseBdev2", 00:12:45.715 "uuid": "2b2828c2-5f71-4a15-b8bc-95a24ea76529", 00:12:45.715 "is_configured": true, 00:12:45.715 
"data_offset": 0, 00:12:45.715 "data_size": 65536 00:12:45.715 }, 00:12:45.715 { 00:12:45.715 "name": "BaseBdev3", 00:12:45.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.715 "is_configured": false, 00:12:45.715 "data_offset": 0, 00:12:45.715 "data_size": 0 00:12:45.715 }, 00:12:45.715 { 00:12:45.715 "name": "BaseBdev4", 00:12:45.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.715 "is_configured": false, 00:12:45.715 "data_offset": 0, 00:12:45.715 "data_size": 0 00:12:45.715 } 00:12:45.715 ] 00:12:45.715 }' 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.715 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.282 [2024-11-15 11:24:28.986542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.282 BaseBdev3 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.282 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.282 [ 00:12:46.282 { 00:12:46.282 "name": "BaseBdev3", 00:12:46.282 "aliases": [ 00:12:46.282 "ac38010a-8344-4f12-b062-26ae1260898d" 00:12:46.282 ], 00:12:46.282 "product_name": "Malloc disk", 00:12:46.282 "block_size": 512, 00:12:46.282 "num_blocks": 65536, 00:12:46.282 "uuid": "ac38010a-8344-4f12-b062-26ae1260898d", 00:12:46.282 "assigned_rate_limits": { 00:12:46.282 "rw_ios_per_sec": 0, 00:12:46.282 "rw_mbytes_per_sec": 0, 00:12:46.282 "r_mbytes_per_sec": 0, 00:12:46.282 "w_mbytes_per_sec": 0 00:12:46.282 }, 00:12:46.282 "claimed": true, 00:12:46.282 "claim_type": "exclusive_write", 00:12:46.282 "zoned": false, 00:12:46.282 "supported_io_types": { 00:12:46.282 "read": true, 00:12:46.282 "write": true, 00:12:46.282 "unmap": true, 00:12:46.282 "flush": true, 00:12:46.282 "reset": true, 00:12:46.282 "nvme_admin": false, 00:12:46.282 "nvme_io": false, 00:12:46.282 "nvme_io_md": false, 00:12:46.282 "write_zeroes": true, 00:12:46.282 "zcopy": true, 00:12:46.282 "get_zone_info": false, 00:12:46.282 "zone_management": false, 00:12:46.282 "zone_append": false, 00:12:46.282 "compare": false, 00:12:46.282 "compare_and_write": false, 00:12:46.282 "abort": true, 
00:12:46.282 "seek_hole": false, 00:12:46.282 "seek_data": false, 00:12:46.282 "copy": true, 00:12:46.282 "nvme_iov_md": false 00:12:46.282 }, 00:12:46.282 "memory_domains": [ 00:12:46.282 { 00:12:46.282 "dma_device_id": "system", 00:12:46.282 "dma_device_type": 1 00:12:46.282 }, 00:12:46.282 { 00:12:46.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.282 "dma_device_type": 2 00:12:46.282 } 00:12:46.282 ], 00:12:46.282 "driver_specific": {} 00:12:46.282 } 00:12:46.282 ] 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.282 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.283 11:24:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.283 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.283 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.283 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.283 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.283 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.283 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.283 "name": "Existed_Raid", 00:12:46.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.283 "strip_size_kb": 0, 00:12:46.283 "state": "configuring", 00:12:46.283 "raid_level": "raid1", 00:12:46.283 "superblock": false, 00:12:46.283 "num_base_bdevs": 4, 00:12:46.283 "num_base_bdevs_discovered": 3, 00:12:46.283 "num_base_bdevs_operational": 4, 00:12:46.283 "base_bdevs_list": [ 00:12:46.283 { 00:12:46.283 "name": "BaseBdev1", 00:12:46.283 "uuid": "79cfdec0-cf18-408a-b301-17ba3a493718", 00:12:46.283 "is_configured": true, 00:12:46.283 "data_offset": 0, 00:12:46.283 "data_size": 65536 00:12:46.283 }, 00:12:46.283 { 00:12:46.283 "name": "BaseBdev2", 00:12:46.283 "uuid": "2b2828c2-5f71-4a15-b8bc-95a24ea76529", 00:12:46.283 "is_configured": true, 00:12:46.283 "data_offset": 0, 00:12:46.283 "data_size": 65536 00:12:46.283 }, 00:12:46.283 { 00:12:46.283 "name": "BaseBdev3", 00:12:46.283 "uuid": "ac38010a-8344-4f12-b062-26ae1260898d", 00:12:46.283 "is_configured": true, 00:12:46.283 "data_offset": 0, 00:12:46.283 "data_size": 65536 00:12:46.283 }, 00:12:46.283 { 00:12:46.283 "name": "BaseBdev4", 00:12:46.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.283 "is_configured": false, 00:12:46.283 "data_offset": 
0, 00:12:46.283 "data_size": 0 00:12:46.283 } 00:12:46.283 ] 00:12:46.283 }' 00:12:46.283 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.283 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.850 [2024-11-15 11:24:29.594390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:46.850 [2024-11-15 11:24:29.594711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:46.850 [2024-11-15 11:24:29.594736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:46.850 [2024-11-15 11:24:29.595145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:46.850 [2024-11-15 11:24:29.595473] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:46.850 [2024-11-15 11:24:29.595513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:46.850 [2024-11-15 11:24:29.595848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.850 BaseBdev4 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local 
bdev_timeout= 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.850 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.850 [ 00:12:46.850 { 00:12:46.850 "name": "BaseBdev4", 00:12:46.850 "aliases": [ 00:12:46.850 "1d130709-526e-4ad2-8d24-c12bccf4fbfb" 00:12:46.850 ], 00:12:46.850 "product_name": "Malloc disk", 00:12:46.850 "block_size": 512, 00:12:46.850 "num_blocks": 65536, 00:12:46.850 "uuid": "1d130709-526e-4ad2-8d24-c12bccf4fbfb", 00:12:46.850 "assigned_rate_limits": { 00:12:46.850 "rw_ios_per_sec": 0, 00:12:46.850 "rw_mbytes_per_sec": 0, 00:12:46.850 "r_mbytes_per_sec": 0, 00:12:46.850 "w_mbytes_per_sec": 0 00:12:46.851 }, 00:12:46.851 "claimed": true, 00:12:46.851 "claim_type": "exclusive_write", 00:12:46.851 "zoned": false, 00:12:46.851 "supported_io_types": { 00:12:46.851 "read": true, 00:12:46.851 "write": true, 00:12:46.851 "unmap": true, 00:12:46.851 "flush": true, 00:12:46.851 "reset": true, 00:12:46.851 "nvme_admin": false, 00:12:46.851 "nvme_io": 
false, 00:12:46.851 "nvme_io_md": false, 00:12:46.851 "write_zeroes": true, 00:12:46.851 "zcopy": true, 00:12:46.851 "get_zone_info": false, 00:12:46.851 "zone_management": false, 00:12:46.851 "zone_append": false, 00:12:46.851 "compare": false, 00:12:46.851 "compare_and_write": false, 00:12:46.851 "abort": true, 00:12:46.851 "seek_hole": false, 00:12:46.851 "seek_data": false, 00:12:46.851 "copy": true, 00:12:46.851 "nvme_iov_md": false 00:12:46.851 }, 00:12:46.851 "memory_domains": [ 00:12:46.851 { 00:12:46.851 "dma_device_id": "system", 00:12:46.851 "dma_device_type": 1 00:12:46.851 }, 00:12:46.851 { 00:12:46.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.851 "dma_device_type": 2 00:12:46.851 } 00:12:46.851 ], 00:12:46.851 "driver_specific": {} 00:12:46.851 } 00:12:46.851 ] 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.851 "name": "Existed_Raid", 00:12:46.851 "uuid": "0e433dad-1a26-4891-bafc-8a46a31492f5", 00:12:46.851 "strip_size_kb": 0, 00:12:46.851 "state": "online", 00:12:46.851 "raid_level": "raid1", 00:12:46.851 "superblock": false, 00:12:46.851 "num_base_bdevs": 4, 00:12:46.851 "num_base_bdevs_discovered": 4, 00:12:46.851 "num_base_bdevs_operational": 4, 00:12:46.851 "base_bdevs_list": [ 00:12:46.851 { 00:12:46.851 "name": "BaseBdev1", 00:12:46.851 "uuid": "79cfdec0-cf18-408a-b301-17ba3a493718", 00:12:46.851 "is_configured": true, 00:12:46.851 "data_offset": 0, 00:12:46.851 "data_size": 65536 00:12:46.851 }, 00:12:46.851 { 00:12:46.851 "name": "BaseBdev2", 00:12:46.851 "uuid": "2b2828c2-5f71-4a15-b8bc-95a24ea76529", 00:12:46.851 "is_configured": true, 00:12:46.851 "data_offset": 0, 00:12:46.851 "data_size": 65536 00:12:46.851 }, 00:12:46.851 { 00:12:46.851 "name": "BaseBdev3", 00:12:46.851 "uuid": "ac38010a-8344-4f12-b062-26ae1260898d", 
00:12:46.851 "is_configured": true, 00:12:46.851 "data_offset": 0, 00:12:46.851 "data_size": 65536 00:12:46.851 }, 00:12:46.851 { 00:12:46.851 "name": "BaseBdev4", 00:12:46.851 "uuid": "1d130709-526e-4ad2-8d24-c12bccf4fbfb", 00:12:46.851 "is_configured": true, 00:12:46.851 "data_offset": 0, 00:12:46.851 "data_size": 65536 00:12:46.851 } 00:12:46.851 ] 00:12:46.851 }' 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.851 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:47.418 [2024-11-15 11:24:30.147151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:47.418 "name": "Existed_Raid", 00:12:47.418 "aliases": [ 00:12:47.418 "0e433dad-1a26-4891-bafc-8a46a31492f5" 00:12:47.418 ], 00:12:47.418 "product_name": "Raid Volume", 00:12:47.418 "block_size": 512, 00:12:47.418 "num_blocks": 65536, 00:12:47.418 "uuid": "0e433dad-1a26-4891-bafc-8a46a31492f5", 00:12:47.418 "assigned_rate_limits": { 00:12:47.418 "rw_ios_per_sec": 0, 00:12:47.418 "rw_mbytes_per_sec": 0, 00:12:47.418 "r_mbytes_per_sec": 0, 00:12:47.418 "w_mbytes_per_sec": 0 00:12:47.418 }, 00:12:47.418 "claimed": false, 00:12:47.418 "zoned": false, 00:12:47.418 "supported_io_types": { 00:12:47.418 "read": true, 00:12:47.418 "write": true, 00:12:47.418 "unmap": false, 00:12:47.418 "flush": false, 00:12:47.418 "reset": true, 00:12:47.418 "nvme_admin": false, 00:12:47.418 "nvme_io": false, 00:12:47.418 "nvme_io_md": false, 00:12:47.418 "write_zeroes": true, 00:12:47.418 "zcopy": false, 00:12:47.418 "get_zone_info": false, 00:12:47.418 "zone_management": false, 00:12:47.418 "zone_append": false, 00:12:47.418 "compare": false, 00:12:47.418 "compare_and_write": false, 00:12:47.418 "abort": false, 00:12:47.418 "seek_hole": false, 00:12:47.418 "seek_data": false, 00:12:47.418 "copy": false, 00:12:47.418 "nvme_iov_md": false 00:12:47.418 }, 00:12:47.418 "memory_domains": [ 00:12:47.418 { 00:12:47.418 "dma_device_id": "system", 00:12:47.418 "dma_device_type": 1 00:12:47.418 }, 00:12:47.418 { 00:12:47.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.418 "dma_device_type": 2 00:12:47.418 }, 00:12:47.418 { 00:12:47.418 "dma_device_id": "system", 00:12:47.418 "dma_device_type": 1 00:12:47.418 }, 00:12:47.418 { 00:12:47.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.418 "dma_device_type": 2 00:12:47.418 }, 00:12:47.418 { 00:12:47.418 "dma_device_id": "system", 00:12:47.418 "dma_device_type": 1 00:12:47.418 }, 00:12:47.418 { 00:12:47.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.418 "dma_device_type": 2 
00:12:47.418 }, 00:12:47.418 { 00:12:47.418 "dma_device_id": "system", 00:12:47.418 "dma_device_type": 1 00:12:47.418 }, 00:12:47.418 { 00:12:47.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.418 "dma_device_type": 2 00:12:47.418 } 00:12:47.418 ], 00:12:47.418 "driver_specific": { 00:12:47.418 "raid": { 00:12:47.418 "uuid": "0e433dad-1a26-4891-bafc-8a46a31492f5", 00:12:47.418 "strip_size_kb": 0, 00:12:47.418 "state": "online", 00:12:47.418 "raid_level": "raid1", 00:12:47.418 "superblock": false, 00:12:47.418 "num_base_bdevs": 4, 00:12:47.418 "num_base_bdevs_discovered": 4, 00:12:47.418 "num_base_bdevs_operational": 4, 00:12:47.418 "base_bdevs_list": [ 00:12:47.418 { 00:12:47.418 "name": "BaseBdev1", 00:12:47.418 "uuid": "79cfdec0-cf18-408a-b301-17ba3a493718", 00:12:47.418 "is_configured": true, 00:12:47.418 "data_offset": 0, 00:12:47.418 "data_size": 65536 00:12:47.418 }, 00:12:47.418 { 00:12:47.418 "name": "BaseBdev2", 00:12:47.418 "uuid": "2b2828c2-5f71-4a15-b8bc-95a24ea76529", 00:12:47.418 "is_configured": true, 00:12:47.418 "data_offset": 0, 00:12:47.418 "data_size": 65536 00:12:47.418 }, 00:12:47.418 { 00:12:47.418 "name": "BaseBdev3", 00:12:47.418 "uuid": "ac38010a-8344-4f12-b062-26ae1260898d", 00:12:47.418 "is_configured": true, 00:12:47.418 "data_offset": 0, 00:12:47.418 "data_size": 65536 00:12:47.418 }, 00:12:47.418 { 00:12:47.418 "name": "BaseBdev4", 00:12:47.418 "uuid": "1d130709-526e-4ad2-8d24-c12bccf4fbfb", 00:12:47.418 "is_configured": true, 00:12:47.418 "data_offset": 0, 00:12:47.418 "data_size": 65536 00:12:47.418 } 00:12:47.418 ] 00:12:47.418 } 00:12:47.418 } 00:12:47.418 }' 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:47.418 BaseBdev2 00:12:47.418 BaseBdev3 00:12:47.418 BaseBdev4' 00:12:47.418 
11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.418 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.678 [2024-11-15 11:24:30.506874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.678 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.937 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.937 "name": "Existed_Raid", 00:12:47.937 "uuid": "0e433dad-1a26-4891-bafc-8a46a31492f5", 00:12:47.937 "strip_size_kb": 0, 00:12:47.937 "state": "online", 00:12:47.937 "raid_level": "raid1", 00:12:47.937 "superblock": false, 00:12:47.937 "num_base_bdevs": 4, 00:12:47.937 "num_base_bdevs_discovered": 3, 00:12:47.937 "num_base_bdevs_operational": 3, 00:12:47.937 "base_bdevs_list": [ 00:12:47.937 { 00:12:47.937 "name": null, 00:12:47.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.937 "is_configured": false, 00:12:47.937 "data_offset": 0, 00:12:47.937 "data_size": 65536 00:12:47.937 }, 00:12:47.937 { 00:12:47.937 "name": "BaseBdev2", 00:12:47.937 "uuid": "2b2828c2-5f71-4a15-b8bc-95a24ea76529", 00:12:47.937 "is_configured": true, 00:12:47.937 "data_offset": 0, 00:12:47.937 "data_size": 65536 00:12:47.937 }, 00:12:47.937 { 00:12:47.937 "name": "BaseBdev3", 00:12:47.937 "uuid": "ac38010a-8344-4f12-b062-26ae1260898d", 00:12:47.937 "is_configured": true, 00:12:47.937 "data_offset": 0, 00:12:47.937 "data_size": 65536 00:12:47.937 }, 00:12:47.937 { 
00:12:47.937 "name": "BaseBdev4", 00:12:47.937 "uuid": "1d130709-526e-4ad2-8d24-c12bccf4fbfb", 00:12:47.937 "is_configured": true, 00:12:47.937 "data_offset": 0, 00:12:47.937 "data_size": 65536 00:12:47.937 } 00:12:47.937 ] 00:12:47.937 }' 00:12:47.937 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.937 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.195 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:48.195 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:48.195 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.196 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.196 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:48.196 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.196 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.454 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:48.454 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:48.454 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:48.454 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.454 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.454 [2024-11-15 11:24:31.172993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:48.454 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.455 
11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:48.455 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:48.455 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.455 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.455 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:48.455 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.455 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.455 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:48.455 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:48.455 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:48.455 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.455 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.455 [2024-11-15 11:24:31.320404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:48.714 11:24:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.714 [2024-11-15 11:24:31.469912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:48.714 [2024-11-15 11:24:31.470084] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:48.714 [2024-11-15 11:24:31.558729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.714 [2024-11-15 11:24:31.559076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.714 [2024-11-15 11:24:31.559114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.714 11:24:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.714 BaseBdev2 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:48.714 11:24:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.714 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.974 [ 00:12:48.974 { 00:12:48.974 "name": "BaseBdev2", 00:12:48.974 "aliases": [ 00:12:48.974 "5494a3b2-ddfe-485c-b06f-dc34e5a376df" 00:12:48.974 ], 00:12:48.974 "product_name": "Malloc disk", 00:12:48.974 "block_size": 512, 00:12:48.974 "num_blocks": 65536, 00:12:48.974 "uuid": "5494a3b2-ddfe-485c-b06f-dc34e5a376df", 00:12:48.974 "assigned_rate_limits": { 00:12:48.974 "rw_ios_per_sec": 0, 00:12:48.974 "rw_mbytes_per_sec": 0, 00:12:48.974 "r_mbytes_per_sec": 0, 00:12:48.974 "w_mbytes_per_sec": 0 00:12:48.974 }, 00:12:48.974 "claimed": false, 00:12:48.974 "zoned": false, 00:12:48.974 "supported_io_types": { 00:12:48.974 "read": true, 00:12:48.974 "write": true, 00:12:48.974 "unmap": true, 00:12:48.974 "flush": true, 00:12:48.974 "reset": true, 00:12:48.974 "nvme_admin": false, 00:12:48.974 "nvme_io": false, 00:12:48.974 "nvme_io_md": false, 00:12:48.974 "write_zeroes": true, 00:12:48.974 "zcopy": true, 00:12:48.974 "get_zone_info": false, 00:12:48.974 "zone_management": false, 00:12:48.974 "zone_append": false, 00:12:48.974 "compare": false, 00:12:48.974 "compare_and_write": false, 
00:12:48.974 "abort": true, 00:12:48.974 "seek_hole": false, 00:12:48.974 "seek_data": false, 00:12:48.974 "copy": true, 00:12:48.974 "nvme_iov_md": false 00:12:48.974 }, 00:12:48.974 "memory_domains": [ 00:12:48.974 { 00:12:48.974 "dma_device_id": "system", 00:12:48.974 "dma_device_type": 1 00:12:48.974 }, 00:12:48.974 { 00:12:48.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.974 "dma_device_type": 2 00:12:48.974 } 00:12:48.974 ], 00:12:48.974 "driver_specific": {} 00:12:48.974 } 00:12:48.974 ] 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.974 BaseBdev3 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:48.974 11:24:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.974 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.974 [ 00:12:48.974 { 00:12:48.974 "name": "BaseBdev3", 00:12:48.974 "aliases": [ 00:12:48.974 "e05c6885-f618-4dfb-99ed-9a212119ee02" 00:12:48.974 ], 00:12:48.974 "product_name": "Malloc disk", 00:12:48.974 "block_size": 512, 00:12:48.974 "num_blocks": 65536, 00:12:48.974 "uuid": "e05c6885-f618-4dfb-99ed-9a212119ee02", 00:12:48.974 "assigned_rate_limits": { 00:12:48.974 "rw_ios_per_sec": 0, 00:12:48.974 "rw_mbytes_per_sec": 0, 00:12:48.974 "r_mbytes_per_sec": 0, 00:12:48.974 "w_mbytes_per_sec": 0 00:12:48.974 }, 00:12:48.974 "claimed": false, 00:12:48.974 "zoned": false, 00:12:48.974 "supported_io_types": { 00:12:48.974 "read": true, 00:12:48.974 "write": true, 00:12:48.974 "unmap": true, 00:12:48.974 "flush": true, 00:12:48.974 "reset": true, 00:12:48.974 "nvme_admin": false, 00:12:48.974 "nvme_io": false, 00:12:48.974 "nvme_io_md": false, 00:12:48.974 "write_zeroes": true, 00:12:48.974 "zcopy": true, 00:12:48.974 "get_zone_info": false, 00:12:48.974 "zone_management": false, 00:12:48.975 "zone_append": false, 00:12:48.975 "compare": false, 00:12:48.975 "compare_and_write": false, 
00:12:48.975 "abort": true, 00:12:48.975 "seek_hole": false, 00:12:48.975 "seek_data": false, 00:12:48.975 "copy": true, 00:12:48.975 "nvme_iov_md": false 00:12:48.975 }, 00:12:48.975 "memory_domains": [ 00:12:48.975 { 00:12:48.975 "dma_device_id": "system", 00:12:48.975 "dma_device_type": 1 00:12:48.975 }, 00:12:48.975 { 00:12:48.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.975 "dma_device_type": 2 00:12:48.975 } 00:12:48.975 ], 00:12:48.975 "driver_specific": {} 00:12:48.975 } 00:12:48.975 ] 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.975 BaseBdev4 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:48.975 11:24:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.975 [ 00:12:48.975 { 00:12:48.975 "name": "BaseBdev4", 00:12:48.975 "aliases": [ 00:12:48.975 "c862e07b-105b-4f31-9975-5db118c41019" 00:12:48.975 ], 00:12:48.975 "product_name": "Malloc disk", 00:12:48.975 "block_size": 512, 00:12:48.975 "num_blocks": 65536, 00:12:48.975 "uuid": "c862e07b-105b-4f31-9975-5db118c41019", 00:12:48.975 "assigned_rate_limits": { 00:12:48.975 "rw_ios_per_sec": 0, 00:12:48.975 "rw_mbytes_per_sec": 0, 00:12:48.975 "r_mbytes_per_sec": 0, 00:12:48.975 "w_mbytes_per_sec": 0 00:12:48.975 }, 00:12:48.975 "claimed": false, 00:12:48.975 "zoned": false, 00:12:48.975 "supported_io_types": { 00:12:48.975 "read": true, 00:12:48.975 "write": true, 00:12:48.975 "unmap": true, 00:12:48.975 "flush": true, 00:12:48.975 "reset": true, 00:12:48.975 "nvme_admin": false, 00:12:48.975 "nvme_io": false, 00:12:48.975 "nvme_io_md": false, 00:12:48.975 "write_zeroes": true, 00:12:48.975 "zcopy": true, 00:12:48.975 "get_zone_info": false, 00:12:48.975 "zone_management": false, 00:12:48.975 "zone_append": false, 00:12:48.975 "compare": false, 00:12:48.975 "compare_and_write": false, 
00:12:48.975 "abort": true, 00:12:48.975 "seek_hole": false, 00:12:48.975 "seek_data": false, 00:12:48.975 "copy": true, 00:12:48.975 "nvme_iov_md": false 00:12:48.975 }, 00:12:48.975 "memory_domains": [ 00:12:48.975 { 00:12:48.975 "dma_device_id": "system", 00:12:48.975 "dma_device_type": 1 00:12:48.975 }, 00:12:48.975 { 00:12:48.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.975 "dma_device_type": 2 00:12:48.975 } 00:12:48.975 ], 00:12:48.975 "driver_specific": {} 00:12:48.975 } 00:12:48.975 ] 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.975 [2024-11-15 11:24:31.865969] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.975 [2024-11-15 11:24:31.866073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.975 [2024-11-15 11:24:31.866108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.975 [2024-11-15 11:24:31.868836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.975 [2024-11-15 11:24:31.868898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:48.975 11:24:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.975 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.234 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.234 "name": "Existed_Raid", 00:12:49.234 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:49.234 "strip_size_kb": 0, 00:12:49.234 "state": "configuring", 00:12:49.234 "raid_level": "raid1", 00:12:49.234 "superblock": false, 00:12:49.234 "num_base_bdevs": 4, 00:12:49.234 "num_base_bdevs_discovered": 3, 00:12:49.234 "num_base_bdevs_operational": 4, 00:12:49.234 "base_bdevs_list": [ 00:12:49.234 { 00:12:49.234 "name": "BaseBdev1", 00:12:49.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.234 "is_configured": false, 00:12:49.234 "data_offset": 0, 00:12:49.234 "data_size": 0 00:12:49.234 }, 00:12:49.234 { 00:12:49.234 "name": "BaseBdev2", 00:12:49.234 "uuid": "5494a3b2-ddfe-485c-b06f-dc34e5a376df", 00:12:49.234 "is_configured": true, 00:12:49.234 "data_offset": 0, 00:12:49.234 "data_size": 65536 00:12:49.234 }, 00:12:49.234 { 00:12:49.234 "name": "BaseBdev3", 00:12:49.234 "uuid": "e05c6885-f618-4dfb-99ed-9a212119ee02", 00:12:49.234 "is_configured": true, 00:12:49.234 "data_offset": 0, 00:12:49.234 "data_size": 65536 00:12:49.234 }, 00:12:49.234 { 00:12:49.234 "name": "BaseBdev4", 00:12:49.234 "uuid": "c862e07b-105b-4f31-9975-5db118c41019", 00:12:49.234 "is_configured": true, 00:12:49.234 "data_offset": 0, 00:12:49.234 "data_size": 65536 00:12:49.234 } 00:12:49.234 ] 00:12:49.234 }' 00:12:49.234 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.234 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 [2024-11-15 11:24:32.402214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 11:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.792 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.792 "name": "Existed_Raid", 00:12:49.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.792 
"strip_size_kb": 0, 00:12:49.792 "state": "configuring", 00:12:49.792 "raid_level": "raid1", 00:12:49.792 "superblock": false, 00:12:49.792 "num_base_bdevs": 4, 00:12:49.792 "num_base_bdevs_discovered": 2, 00:12:49.792 "num_base_bdevs_operational": 4, 00:12:49.792 "base_bdevs_list": [ 00:12:49.792 { 00:12:49.792 "name": "BaseBdev1", 00:12:49.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.792 "is_configured": false, 00:12:49.792 "data_offset": 0, 00:12:49.792 "data_size": 0 00:12:49.792 }, 00:12:49.792 { 00:12:49.792 "name": null, 00:12:49.792 "uuid": "5494a3b2-ddfe-485c-b06f-dc34e5a376df", 00:12:49.792 "is_configured": false, 00:12:49.792 "data_offset": 0, 00:12:49.792 "data_size": 65536 00:12:49.792 }, 00:12:49.792 { 00:12:49.792 "name": "BaseBdev3", 00:12:49.792 "uuid": "e05c6885-f618-4dfb-99ed-9a212119ee02", 00:12:49.792 "is_configured": true, 00:12:49.792 "data_offset": 0, 00:12:49.792 "data_size": 65536 00:12:49.792 }, 00:12:49.792 { 00:12:49.792 "name": "BaseBdev4", 00:12:49.792 "uuid": "c862e07b-105b-4f31-9975-5db118c41019", 00:12:49.792 "is_configured": true, 00:12:49.792 "data_offset": 0, 00:12:49.792 "data_size": 65536 00:12:49.792 } 00:12:49.792 ] 00:12:49.792 }' 00:12:49.792 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.792 11:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.051 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.051 11:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.051 11:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.051 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:50.051 11:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.051 11:24:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:50.051 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:50.051 11:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.051 11:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.310 [2024-11-15 11:24:33.011248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.310 BaseBdev1 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.310 [ 00:12:50.310 { 00:12:50.310 "name": "BaseBdev1", 00:12:50.310 "aliases": [ 00:12:50.310 "641a7c47-badb-42c1-82f9-865d50cd6d75" 00:12:50.310 ], 00:12:50.310 "product_name": "Malloc disk", 00:12:50.310 "block_size": 512, 00:12:50.310 "num_blocks": 65536, 00:12:50.310 "uuid": "641a7c47-badb-42c1-82f9-865d50cd6d75", 00:12:50.310 "assigned_rate_limits": { 00:12:50.310 "rw_ios_per_sec": 0, 00:12:50.310 "rw_mbytes_per_sec": 0, 00:12:50.310 "r_mbytes_per_sec": 0, 00:12:50.310 "w_mbytes_per_sec": 0 00:12:50.310 }, 00:12:50.310 "claimed": true, 00:12:50.310 "claim_type": "exclusive_write", 00:12:50.310 "zoned": false, 00:12:50.310 "supported_io_types": { 00:12:50.310 "read": true, 00:12:50.310 "write": true, 00:12:50.310 "unmap": true, 00:12:50.310 "flush": true, 00:12:50.310 "reset": true, 00:12:50.310 "nvme_admin": false, 00:12:50.310 "nvme_io": false, 00:12:50.310 "nvme_io_md": false, 00:12:50.310 "write_zeroes": true, 00:12:50.310 "zcopy": true, 00:12:50.310 "get_zone_info": false, 00:12:50.310 "zone_management": false, 00:12:50.310 "zone_append": false, 00:12:50.310 "compare": false, 00:12:50.310 "compare_and_write": false, 00:12:50.310 "abort": true, 00:12:50.310 "seek_hole": false, 00:12:50.310 "seek_data": false, 00:12:50.310 "copy": true, 00:12:50.310 "nvme_iov_md": false 00:12:50.310 }, 00:12:50.310 "memory_domains": [ 00:12:50.310 { 00:12:50.310 "dma_device_id": "system", 00:12:50.310 "dma_device_type": 1 00:12:50.310 }, 00:12:50.310 { 00:12:50.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.310 "dma_device_type": 2 00:12:50.310 } 00:12:50.310 ], 00:12:50.310 "driver_specific": {} 00:12:50.310 } 00:12:50.310 ] 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # return 0 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.310 "name": "Existed_Raid", 00:12:50.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.310 
"strip_size_kb": 0, 00:12:50.310 "state": "configuring", 00:12:50.310 "raid_level": "raid1", 00:12:50.310 "superblock": false, 00:12:50.310 "num_base_bdevs": 4, 00:12:50.310 "num_base_bdevs_discovered": 3, 00:12:50.310 "num_base_bdevs_operational": 4, 00:12:50.310 "base_bdevs_list": [ 00:12:50.310 { 00:12:50.310 "name": "BaseBdev1", 00:12:50.310 "uuid": "641a7c47-badb-42c1-82f9-865d50cd6d75", 00:12:50.310 "is_configured": true, 00:12:50.310 "data_offset": 0, 00:12:50.310 "data_size": 65536 00:12:50.310 }, 00:12:50.310 { 00:12:50.310 "name": null, 00:12:50.310 "uuid": "5494a3b2-ddfe-485c-b06f-dc34e5a376df", 00:12:50.310 "is_configured": false, 00:12:50.310 "data_offset": 0, 00:12:50.310 "data_size": 65536 00:12:50.310 }, 00:12:50.310 { 00:12:50.310 "name": "BaseBdev3", 00:12:50.310 "uuid": "e05c6885-f618-4dfb-99ed-9a212119ee02", 00:12:50.310 "is_configured": true, 00:12:50.310 "data_offset": 0, 00:12:50.310 "data_size": 65536 00:12:50.310 }, 00:12:50.310 { 00:12:50.310 "name": "BaseBdev4", 00:12:50.310 "uuid": "c862e07b-105b-4f31-9975-5db118c41019", 00:12:50.310 "is_configured": true, 00:12:50.310 "data_offset": 0, 00:12:50.310 "data_size": 65536 00:12:50.310 } 00:12:50.310 ] 00:12:50.310 }' 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.310 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.877 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.877 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.877 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.877 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.878 
11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.878 [2024-11-15 11:24:33.607606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.878 "name": "Existed_Raid", 00:12:50.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.878 "strip_size_kb": 0, 00:12:50.878 "state": "configuring", 00:12:50.878 "raid_level": "raid1", 00:12:50.878 "superblock": false, 00:12:50.878 "num_base_bdevs": 4, 00:12:50.878 "num_base_bdevs_discovered": 2, 00:12:50.878 "num_base_bdevs_operational": 4, 00:12:50.878 "base_bdevs_list": [ 00:12:50.878 { 00:12:50.878 "name": "BaseBdev1", 00:12:50.878 "uuid": "641a7c47-badb-42c1-82f9-865d50cd6d75", 00:12:50.878 "is_configured": true, 00:12:50.878 "data_offset": 0, 00:12:50.878 "data_size": 65536 00:12:50.878 }, 00:12:50.878 { 00:12:50.878 "name": null, 00:12:50.878 "uuid": "5494a3b2-ddfe-485c-b06f-dc34e5a376df", 00:12:50.878 "is_configured": false, 00:12:50.878 "data_offset": 0, 00:12:50.878 "data_size": 65536 00:12:50.878 }, 00:12:50.878 { 00:12:50.878 "name": null, 00:12:50.878 "uuid": "e05c6885-f618-4dfb-99ed-9a212119ee02", 00:12:50.878 "is_configured": false, 00:12:50.878 "data_offset": 0, 00:12:50.878 "data_size": 65536 00:12:50.878 }, 00:12:50.878 { 00:12:50.878 "name": "BaseBdev4", 00:12:50.878 "uuid": "c862e07b-105b-4f31-9975-5db118c41019", 00:12:50.878 "is_configured": true, 00:12:50.878 "data_offset": 0, 00:12:50.878 "data_size": 65536 00:12:50.878 } 00:12:50.878 ] 00:12:50.878 }' 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.878 11:24:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.444 [2024-11-15 11:24:34.207751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.444 "name": "Existed_Raid", 00:12:51.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.444 "strip_size_kb": 0, 00:12:51.444 "state": "configuring", 00:12:51.444 "raid_level": "raid1", 00:12:51.444 "superblock": false, 00:12:51.444 "num_base_bdevs": 4, 00:12:51.444 "num_base_bdevs_discovered": 3, 00:12:51.444 "num_base_bdevs_operational": 4, 00:12:51.444 "base_bdevs_list": [ 00:12:51.444 { 00:12:51.444 "name": "BaseBdev1", 00:12:51.444 "uuid": "641a7c47-badb-42c1-82f9-865d50cd6d75", 00:12:51.444 "is_configured": true, 00:12:51.444 "data_offset": 0, 00:12:51.444 "data_size": 65536 00:12:51.444 }, 00:12:51.444 { 00:12:51.444 "name": null, 00:12:51.444 "uuid": "5494a3b2-ddfe-485c-b06f-dc34e5a376df", 00:12:51.444 "is_configured": false, 00:12:51.444 "data_offset": 0, 00:12:51.444 "data_size": 65536 00:12:51.444 }, 00:12:51.444 { 
00:12:51.444 "name": "BaseBdev3", 00:12:51.444 "uuid": "e05c6885-f618-4dfb-99ed-9a212119ee02", 00:12:51.444 "is_configured": true, 00:12:51.444 "data_offset": 0, 00:12:51.444 "data_size": 65536 00:12:51.444 }, 00:12:51.444 { 00:12:51.444 "name": "BaseBdev4", 00:12:51.444 "uuid": "c862e07b-105b-4f31-9975-5db118c41019", 00:12:51.444 "is_configured": true, 00:12:51.444 "data_offset": 0, 00:12:51.444 "data_size": 65536 00:12:51.444 } 00:12:51.444 ] 00:12:51.444 }' 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.444 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.011 [2024-11-15 11:24:34.787936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.011 "name": "Existed_Raid", 00:12:52.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.011 "strip_size_kb": 0, 00:12:52.011 "state": "configuring", 00:12:52.011 "raid_level": "raid1", 00:12:52.011 "superblock": false, 00:12:52.011 
"num_base_bdevs": 4, 00:12:52.011 "num_base_bdevs_discovered": 2, 00:12:52.011 "num_base_bdevs_operational": 4, 00:12:52.011 "base_bdevs_list": [ 00:12:52.011 { 00:12:52.011 "name": null, 00:12:52.011 "uuid": "641a7c47-badb-42c1-82f9-865d50cd6d75", 00:12:52.011 "is_configured": false, 00:12:52.011 "data_offset": 0, 00:12:52.011 "data_size": 65536 00:12:52.011 }, 00:12:52.011 { 00:12:52.011 "name": null, 00:12:52.011 "uuid": "5494a3b2-ddfe-485c-b06f-dc34e5a376df", 00:12:52.011 "is_configured": false, 00:12:52.011 "data_offset": 0, 00:12:52.011 "data_size": 65536 00:12:52.011 }, 00:12:52.011 { 00:12:52.011 "name": "BaseBdev3", 00:12:52.011 "uuid": "e05c6885-f618-4dfb-99ed-9a212119ee02", 00:12:52.011 "is_configured": true, 00:12:52.011 "data_offset": 0, 00:12:52.011 "data_size": 65536 00:12:52.011 }, 00:12:52.011 { 00:12:52.011 "name": "BaseBdev4", 00:12:52.011 "uuid": "c862e07b-105b-4f31-9975-5db118c41019", 00:12:52.011 "is_configured": true, 00:12:52.011 "data_offset": 0, 00:12:52.011 "data_size": 65536 00:12:52.011 } 00:12:52.011 ] 00:12:52.011 }' 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.011 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:52.578 11:24:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.578 [2024-11-15 11:24:35.426616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.578 11:24:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.578 "name": "Existed_Raid", 00:12:52.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.578 "strip_size_kb": 0, 00:12:52.578 "state": "configuring", 00:12:52.578 "raid_level": "raid1", 00:12:52.578 "superblock": false, 00:12:52.578 "num_base_bdevs": 4, 00:12:52.578 "num_base_bdevs_discovered": 3, 00:12:52.578 "num_base_bdevs_operational": 4, 00:12:52.578 "base_bdevs_list": [ 00:12:52.578 { 00:12:52.578 "name": null, 00:12:52.578 "uuid": "641a7c47-badb-42c1-82f9-865d50cd6d75", 00:12:52.578 "is_configured": false, 00:12:52.578 "data_offset": 0, 00:12:52.578 "data_size": 65536 00:12:52.578 }, 00:12:52.578 { 00:12:52.578 "name": "BaseBdev2", 00:12:52.578 "uuid": "5494a3b2-ddfe-485c-b06f-dc34e5a376df", 00:12:52.578 "is_configured": true, 00:12:52.578 "data_offset": 0, 00:12:52.578 "data_size": 65536 00:12:52.578 }, 00:12:52.578 { 00:12:52.578 "name": "BaseBdev3", 00:12:52.578 "uuid": "e05c6885-f618-4dfb-99ed-9a212119ee02", 00:12:52.578 "is_configured": true, 00:12:52.578 "data_offset": 0, 00:12:52.578 "data_size": 65536 00:12:52.578 }, 00:12:52.578 { 00:12:52.578 "name": "BaseBdev4", 00:12:52.578 "uuid": "c862e07b-105b-4f31-9975-5db118c41019", 00:12:52.578 "is_configured": true, 00:12:52.578 "data_offset": 0, 00:12:52.578 "data_size": 65536 00:12:52.578 } 00:12:52.578 ] 00:12:52.578 }' 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.578 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.145 11:24:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:53.145 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.145 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.145 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.145 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.145 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:53.145 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.145 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:53.145 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.145 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.145 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.145 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 641a7c47-badb-42c1-82f9-865d50cd6d75 00:12:53.145 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.145 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.145 [2024-11-15 11:24:36.083865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:53.145 [2024-11-15 11:24:36.083920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:53.145 [2024-11-15 11:24:36.083936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:53.145 
[2024-11-15 11:24:36.084324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:53.146 [2024-11-15 11:24:36.084549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:53.146 [2024-11-15 11:24:36.084567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:53.146 [2024-11-15 11:24:36.084944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.146 NewBaseBdev 00:12:53.146 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.146 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:53.146 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:53.146 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:53.146 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:53.146 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:53.146 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:53.146 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:53.146 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.146 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.405 [ 00:12:53.405 { 00:12:53.405 "name": "NewBaseBdev", 00:12:53.405 "aliases": [ 00:12:53.405 "641a7c47-badb-42c1-82f9-865d50cd6d75" 00:12:53.405 ], 00:12:53.405 "product_name": "Malloc disk", 00:12:53.405 "block_size": 512, 00:12:53.405 "num_blocks": 65536, 00:12:53.405 "uuid": "641a7c47-badb-42c1-82f9-865d50cd6d75", 00:12:53.405 "assigned_rate_limits": { 00:12:53.405 "rw_ios_per_sec": 0, 00:12:53.405 "rw_mbytes_per_sec": 0, 00:12:53.405 "r_mbytes_per_sec": 0, 00:12:53.405 "w_mbytes_per_sec": 0 00:12:53.405 }, 00:12:53.405 "claimed": true, 00:12:53.405 "claim_type": "exclusive_write", 00:12:53.405 "zoned": false, 00:12:53.405 "supported_io_types": { 00:12:53.405 "read": true, 00:12:53.405 "write": true, 00:12:53.405 "unmap": true, 00:12:53.405 "flush": true, 00:12:53.405 "reset": true, 00:12:53.405 "nvme_admin": false, 00:12:53.405 "nvme_io": false, 00:12:53.405 "nvme_io_md": false, 00:12:53.405 "write_zeroes": true, 00:12:53.405 "zcopy": true, 00:12:53.405 "get_zone_info": false, 00:12:53.405 "zone_management": false, 00:12:53.405 "zone_append": false, 00:12:53.405 "compare": false, 00:12:53.405 "compare_and_write": false, 00:12:53.405 "abort": true, 00:12:53.405 "seek_hole": false, 00:12:53.405 "seek_data": false, 00:12:53.405 "copy": true, 00:12:53.405 "nvme_iov_md": false 00:12:53.405 }, 00:12:53.405 "memory_domains": [ 00:12:53.405 { 00:12:53.405 "dma_device_id": "system", 00:12:53.405 "dma_device_type": 1 00:12:53.405 }, 00:12:53.405 { 00:12:53.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.405 "dma_device_type": 2 00:12:53.405 } 00:12:53.405 ], 00:12:53.405 "driver_specific": {} 00:12:53.405 } 00:12:53.405 ] 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.405 "name": "Existed_Raid", 00:12:53.405 "uuid": "718a292e-0f20-42ba-a14d-c7d5804d535a", 00:12:53.405 "strip_size_kb": 0, 00:12:53.405 "state": "online", 00:12:53.405 
"raid_level": "raid1", 00:12:53.405 "superblock": false, 00:12:53.405 "num_base_bdevs": 4, 00:12:53.405 "num_base_bdevs_discovered": 4, 00:12:53.405 "num_base_bdevs_operational": 4, 00:12:53.405 "base_bdevs_list": [ 00:12:53.405 { 00:12:53.405 "name": "NewBaseBdev", 00:12:53.405 "uuid": "641a7c47-badb-42c1-82f9-865d50cd6d75", 00:12:53.405 "is_configured": true, 00:12:53.405 "data_offset": 0, 00:12:53.405 "data_size": 65536 00:12:53.405 }, 00:12:53.405 { 00:12:53.405 "name": "BaseBdev2", 00:12:53.405 "uuid": "5494a3b2-ddfe-485c-b06f-dc34e5a376df", 00:12:53.405 "is_configured": true, 00:12:53.405 "data_offset": 0, 00:12:53.405 "data_size": 65536 00:12:53.405 }, 00:12:53.405 { 00:12:53.405 "name": "BaseBdev3", 00:12:53.405 "uuid": "e05c6885-f618-4dfb-99ed-9a212119ee02", 00:12:53.405 "is_configured": true, 00:12:53.405 "data_offset": 0, 00:12:53.405 "data_size": 65536 00:12:53.405 }, 00:12:53.405 { 00:12:53.405 "name": "BaseBdev4", 00:12:53.405 "uuid": "c862e07b-105b-4f31-9975-5db118c41019", 00:12:53.405 "is_configured": true, 00:12:53.405 "data_offset": 0, 00:12:53.405 "data_size": 65536 00:12:53.405 } 00:12:53.405 ] 00:12:53.405 }' 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.405 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.972 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.973 [2024-11-15 11:24:36.636482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:53.973 "name": "Existed_Raid", 00:12:53.973 "aliases": [ 00:12:53.973 "718a292e-0f20-42ba-a14d-c7d5804d535a" 00:12:53.973 ], 00:12:53.973 "product_name": "Raid Volume", 00:12:53.973 "block_size": 512, 00:12:53.973 "num_blocks": 65536, 00:12:53.973 "uuid": "718a292e-0f20-42ba-a14d-c7d5804d535a", 00:12:53.973 "assigned_rate_limits": { 00:12:53.973 "rw_ios_per_sec": 0, 00:12:53.973 "rw_mbytes_per_sec": 0, 00:12:53.973 "r_mbytes_per_sec": 0, 00:12:53.973 "w_mbytes_per_sec": 0 00:12:53.973 }, 00:12:53.973 "claimed": false, 00:12:53.973 "zoned": false, 00:12:53.973 "supported_io_types": { 00:12:53.973 "read": true, 00:12:53.973 "write": true, 00:12:53.973 "unmap": false, 00:12:53.973 "flush": false, 00:12:53.973 "reset": true, 00:12:53.973 "nvme_admin": false, 00:12:53.973 "nvme_io": false, 00:12:53.973 "nvme_io_md": false, 00:12:53.973 "write_zeroes": true, 00:12:53.973 "zcopy": false, 00:12:53.973 "get_zone_info": false, 00:12:53.973 "zone_management": false, 00:12:53.973 "zone_append": false, 00:12:53.973 "compare": false, 00:12:53.973 "compare_and_write": false, 00:12:53.973 "abort": false, 00:12:53.973 "seek_hole": false, 00:12:53.973 "seek_data": false, 00:12:53.973 
"copy": false, 00:12:53.973 "nvme_iov_md": false 00:12:53.973 }, 00:12:53.973 "memory_domains": [ 00:12:53.973 { 00:12:53.973 "dma_device_id": "system", 00:12:53.973 "dma_device_type": 1 00:12:53.973 }, 00:12:53.973 { 00:12:53.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.973 "dma_device_type": 2 00:12:53.973 }, 00:12:53.973 { 00:12:53.973 "dma_device_id": "system", 00:12:53.973 "dma_device_type": 1 00:12:53.973 }, 00:12:53.973 { 00:12:53.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.973 "dma_device_type": 2 00:12:53.973 }, 00:12:53.973 { 00:12:53.973 "dma_device_id": "system", 00:12:53.973 "dma_device_type": 1 00:12:53.973 }, 00:12:53.973 { 00:12:53.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.973 "dma_device_type": 2 00:12:53.973 }, 00:12:53.973 { 00:12:53.973 "dma_device_id": "system", 00:12:53.973 "dma_device_type": 1 00:12:53.973 }, 00:12:53.973 { 00:12:53.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.973 "dma_device_type": 2 00:12:53.973 } 00:12:53.973 ], 00:12:53.973 "driver_specific": { 00:12:53.973 "raid": { 00:12:53.973 "uuid": "718a292e-0f20-42ba-a14d-c7d5804d535a", 00:12:53.973 "strip_size_kb": 0, 00:12:53.973 "state": "online", 00:12:53.973 "raid_level": "raid1", 00:12:53.973 "superblock": false, 00:12:53.973 "num_base_bdevs": 4, 00:12:53.973 "num_base_bdevs_discovered": 4, 00:12:53.973 "num_base_bdevs_operational": 4, 00:12:53.973 "base_bdevs_list": [ 00:12:53.973 { 00:12:53.973 "name": "NewBaseBdev", 00:12:53.973 "uuid": "641a7c47-badb-42c1-82f9-865d50cd6d75", 00:12:53.973 "is_configured": true, 00:12:53.973 "data_offset": 0, 00:12:53.973 "data_size": 65536 00:12:53.973 }, 00:12:53.973 { 00:12:53.973 "name": "BaseBdev2", 00:12:53.973 "uuid": "5494a3b2-ddfe-485c-b06f-dc34e5a376df", 00:12:53.973 "is_configured": true, 00:12:53.973 "data_offset": 0, 00:12:53.973 "data_size": 65536 00:12:53.973 }, 00:12:53.973 { 00:12:53.973 "name": "BaseBdev3", 00:12:53.973 "uuid": "e05c6885-f618-4dfb-99ed-9a212119ee02", 00:12:53.973 
"is_configured": true, 00:12:53.973 "data_offset": 0, 00:12:53.973 "data_size": 65536 00:12:53.973 }, 00:12:53.973 { 00:12:53.973 "name": "BaseBdev4", 00:12:53.973 "uuid": "c862e07b-105b-4f31-9975-5db118c41019", 00:12:53.973 "is_configured": true, 00:12:53.973 "data_offset": 0, 00:12:53.973 "data_size": 65536 00:12:53.973 } 00:12:53.973 ] 00:12:53.973 } 00:12:53.973 } 00:12:53.973 }' 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:53.973 BaseBdev2 00:12:53.973 BaseBdev3 00:12:53.973 BaseBdev4' 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.973 11:24:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.973 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.232 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.233 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.233 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.233 11:24:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:54.233 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.233 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.233 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.233 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.233 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.233 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.233 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:54.233 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.233 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.233 [2024-11-15 11:24:37.004088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:54.233 [2024-11-15 11:24:37.004118] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.233 [2024-11-15 11:24:37.004261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.233 [2024-11-15 11:24:37.004672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.233 [2024-11-15 11:24:37.004694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:54.233 11:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.233 11:24:37 bdev_raid.raid_state_function_test -- 
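The loop at bdev_raid.sh@189-193 compares `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` of each base bdev against the raid bdev. jq's `join(" ")` renders null fields as empty strings, which is why the trace compares strings like `512   ` (with trailing spaces, visible in the escaped `[[ 512 == \5\1\2\ \ \ ]]` pattern). A Python sketch of that shape comparison, under the assumption that the md fields are simply absent on these malloc bdevs:

```python
# Build the same "block_size md_size md_interleave dif_type" signature the
# test compares; missing/None fields become empty strings, as in jq's join.
def shape(bdev: dict) -> str:
    fields = [bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type")]
    return " ".join("" if f is None else str(f) for f in fields)

raid_bdev = {"block_size": 512}                 # md fields absent -> empty
base_bdevs = [{"block_size": 512}] * 4          # NewBaseBdev, BaseBdev2..4

# Every configured base bdev must carry the same DIF/metadata shape.
all_match = all(shape(b) == shape(raid_bdev) for b in base_bdevs)
```

`shape(raid_bdev)` here is `"512   "`: the block size followed by three separator spaces for the three empty metadata fields.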
bdev/bdev_raid.sh@326 -- # killprocess 73195 00:12:54.233 11:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73195 ']' 00:12:54.233 11:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73195 00:12:54.233 11:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:54.233 11:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:54.233 11:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73195 00:12:54.233 killing process with pid 73195 00:12:54.233 11:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:54.233 11:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:54.233 11:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73195' 00:12:54.233 11:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73195 00:12:54.233 [2024-11-15 11:24:37.042636] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:54.233 11:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73195 00:12:54.492 [2024-11-15 11:24:37.399609] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:55.869 00:12:55.869 real 0m12.889s 00:12:55.869 user 0m21.277s 00:12:55.869 sys 0m1.887s 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.869 ************************************ 00:12:55.869 END TEST raid_state_function_test 00:12:55.869 ************************************ 
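The teardown above runs the `killprocess` helper against the bdev_svc app (pid 73195 in this run): `kill -0 $pid` probes that the process still exists without delivering a signal, the process name is checked, and only then is it terminated and reaped. A hedged Python sketch of the same pattern, with a throwaway `sleep` child standing in for the SPDK app:

```python
import os
import signal
import subprocess

def process_alive(pid: int) -> bool:
    """Equivalent of 'kill -0 $pid': existence check, no signal delivered."""
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False

child = subprocess.Popen(["sleep", "30"])   # stand-in for bdev_svc
assert process_alive(child.pid)             # kill -0 $pid
child.send_signal(signal.SIGTERM)           # kill $pid
child.wait()                                # wait $pid (reap the child)
```

Checking liveness before signalling avoids masking a test that already crashed its target; the final `wait` prevents the killed process from lingering as a zombie between test cases.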
00:12:55.869 11:24:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:55.869 11:24:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:55.869 11:24:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:55.869 11:24:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:55.869 ************************************ 00:12:55.869 START TEST raid_state_function_test_sb 00:12:55.869 ************************************ 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:55.869 
11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73876 00:12:55.869 Process raid pid: 73876 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73876' 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73876 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 73876 ']' 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:55.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:55.869 11:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.869 [2024-11-15 11:24:38.655131] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:12:55.869 [2024-11-15 11:24:38.655328] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.128 [2024-11-15 11:24:38.843966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.128 [2024-11-15 11:24:38.984155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.386 [2024-11-15 11:24:39.200410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.386 [2024-11-15 11:24:39.200461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.953 [2024-11-15 11:24:39.627410] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:56.953 [2024-11-15 11:24:39.627490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:56.953 [2024-11-15 11:24:39.627513] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:56.953 [2024-11-15 11:24:39.627532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:56.953 [2024-11-15 11:24:39.627543] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:56.953 [2024-11-15 11:24:39.627558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:56.953 [2024-11-15 11:24:39.627569] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:56.953 [2024-11-15 11:24:39.627585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.953 11:24:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.953 "name": "Existed_Raid", 00:12:56.953 "uuid": "3659cdfa-3c3e-43ed-99b0-805248398f30", 00:12:56.953 "strip_size_kb": 0, 00:12:56.953 "state": "configuring", 00:12:56.953 "raid_level": "raid1", 00:12:56.953 "superblock": true, 00:12:56.953 "num_base_bdevs": 4, 00:12:56.953 "num_base_bdevs_discovered": 0, 00:12:56.953 "num_base_bdevs_operational": 4, 00:12:56.953 "base_bdevs_list": [ 00:12:56.953 { 00:12:56.953 "name": "BaseBdev1", 00:12:56.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.953 "is_configured": false, 00:12:56.953 "data_offset": 0, 00:12:56.953 "data_size": 0 00:12:56.953 }, 00:12:56.953 { 00:12:56.953 "name": "BaseBdev2", 00:12:56.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.953 "is_configured": false, 00:12:56.953 "data_offset": 0, 00:12:56.953 "data_size": 0 00:12:56.953 }, 00:12:56.953 { 00:12:56.953 "name": "BaseBdev3", 00:12:56.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.953 "is_configured": false, 00:12:56.953 "data_offset": 0, 00:12:56.953 "data_size": 0 00:12:56.953 }, 00:12:56.953 { 00:12:56.953 "name": "BaseBdev4", 00:12:56.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.953 "is_configured": false, 00:12:56.953 "data_offset": 0, 00:12:56.953 "data_size": 0 00:12:56.953 } 00:12:56.953 ] 00:12:56.953 }' 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.953 11:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.212 11:24:40 
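The `verify_raid_bdev_state Existed_Raid configuring raid1 0 4` call above boils down to field checks on the JSON blob just captured. A Python sketch re-running those checks on that blob (UUID copied from the log; the all-zero base-bdev UUIDs mark slots with no bdev attached yet):

```python
import json

raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "uuid": "3659cdfa-3c3e-43ed-99b0-805248398f30",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": false},
    {"name": "BaseBdev3", "is_configured": false},
    {"name": "BaseBdev4", "is_configured": false}
  ]
}
""")

def verify_state(info, state, level, strip_size_kb, operational):
    # Mirrors the shell helper: the discovered count must also agree with
    # how many base bdev slots are actually configured.
    discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
    return (info["state"] == state
            and info["raid_level"] == level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == operational
            and info["num_base_bdevs_discovered"] == discovered)

ok = verify_state(raid_bdev_info, "configuring", "raid1", 0, 4)
```

With no base bdevs created yet, the raid stays in `configuring` with zero discovered members, which is exactly what this stage of the test expects.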
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:57.212 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.212 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.212 [2024-11-15 11:24:40.135507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:57.212 [2024-11-15 11:24:40.135583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:57.212 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.212 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:57.212 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.212 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.212 [2024-11-15 11:24:40.143502] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:57.212 [2024-11-15 11:24:40.143554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:57.212 [2024-11-15 11:24:40.143584] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:57.212 [2024-11-15 11:24:40.143615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:57.212 [2024-11-15 11:24:40.143624] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:57.212 [2024-11-15 11:24:40.143638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:57.212 [2024-11-15 11:24:40.143647] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:12:57.212 [2024-11-15 11:24:40.143661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:57.212 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.212 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:57.212 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.212 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.471 [2024-11-15 11:24:40.191219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:57.471 BaseBdev1 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.471 [ 00:12:57.471 { 00:12:57.471 "name": "BaseBdev1", 00:12:57.471 "aliases": [ 00:12:57.471 "2e7c5dfe-65ef-404f-8d9a-3d44ffea7df2" 00:12:57.471 ], 00:12:57.471 "product_name": "Malloc disk", 00:12:57.471 "block_size": 512, 00:12:57.471 "num_blocks": 65536, 00:12:57.471 "uuid": "2e7c5dfe-65ef-404f-8d9a-3d44ffea7df2", 00:12:57.471 "assigned_rate_limits": { 00:12:57.471 "rw_ios_per_sec": 0, 00:12:57.471 "rw_mbytes_per_sec": 0, 00:12:57.471 "r_mbytes_per_sec": 0, 00:12:57.471 "w_mbytes_per_sec": 0 00:12:57.471 }, 00:12:57.471 "claimed": true, 00:12:57.471 "claim_type": "exclusive_write", 00:12:57.471 "zoned": false, 00:12:57.471 "supported_io_types": { 00:12:57.471 "read": true, 00:12:57.471 "write": true, 00:12:57.471 "unmap": true, 00:12:57.471 "flush": true, 00:12:57.471 "reset": true, 00:12:57.471 "nvme_admin": false, 00:12:57.471 "nvme_io": false, 00:12:57.471 "nvme_io_md": false, 00:12:57.471 "write_zeroes": true, 00:12:57.471 "zcopy": true, 00:12:57.471 "get_zone_info": false, 00:12:57.471 "zone_management": false, 00:12:57.471 "zone_append": false, 00:12:57.471 "compare": false, 00:12:57.471 "compare_and_write": false, 00:12:57.471 "abort": true, 00:12:57.471 "seek_hole": false, 00:12:57.471 "seek_data": false, 00:12:57.471 "copy": true, 00:12:57.471 "nvme_iov_md": false 00:12:57.471 }, 00:12:57.471 "memory_domains": [ 00:12:57.471 { 00:12:57.471 "dma_device_id": "system", 00:12:57.471 "dma_device_type": 1 00:12:57.471 }, 00:12:57.471 { 00:12:57.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.471 "dma_device_type": 2 00:12:57.471 } 00:12:57.471 
], 00:12:57.471 "driver_specific": {} 00:12:57.471 } 00:12:57.471 ] 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.471 11:24:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.471 "name": "Existed_Raid", 00:12:57.471 "uuid": "251893d4-88ee-4406-9c78-41a045aca4ec", 00:12:57.471 "strip_size_kb": 0, 00:12:57.471 "state": "configuring", 00:12:57.471 "raid_level": "raid1", 00:12:57.471 "superblock": true, 00:12:57.471 "num_base_bdevs": 4, 00:12:57.471 "num_base_bdevs_discovered": 1, 00:12:57.471 "num_base_bdevs_operational": 4, 00:12:57.471 "base_bdevs_list": [ 00:12:57.471 { 00:12:57.471 "name": "BaseBdev1", 00:12:57.471 "uuid": "2e7c5dfe-65ef-404f-8d9a-3d44ffea7df2", 00:12:57.471 "is_configured": true, 00:12:57.471 "data_offset": 2048, 00:12:57.471 "data_size": 63488 00:12:57.471 }, 00:12:57.471 { 00:12:57.471 "name": "BaseBdev2", 00:12:57.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.471 "is_configured": false, 00:12:57.471 "data_offset": 0, 00:12:57.471 "data_size": 0 00:12:57.471 }, 00:12:57.471 { 00:12:57.471 "name": "BaseBdev3", 00:12:57.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.471 "is_configured": false, 00:12:57.471 "data_offset": 0, 00:12:57.471 "data_size": 0 00:12:57.471 }, 00:12:57.471 { 00:12:57.471 "name": "BaseBdev4", 00:12:57.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.471 "is_configured": false, 00:12:57.471 "data_offset": 0, 00:12:57.471 "data_size": 0 00:12:57.471 } 00:12:57.471 ] 00:12:57.471 }' 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.471 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.057 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:58.057 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.057 11:24:40 
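One detail worth noting in the blob above: with the `-s` (superblock) flag, BaseBdev1 reports `data_offset: 2048` and `data_size: 63488`, whereas the earlier non-superblock run reported `data_offset: 0` and `data_size: 65536` on the same 65536-block malloc bdev. The difference is the region reserved at the head of each base bdev, which this log's values suggest holds the on-disk superblock. A quick arithmetic check of that inference:

```python
# Values taken from the bdev_get_bdevs output in this log.
block_size = 512      # bdev_malloc_create 32 512 -> 512-byte blocks
num_blocks = 65536    # 32 MiB malloc bdev
data_offset = 2048    # blocks reserved when superblock=true

data_size = num_blocks - data_offset        # usable blocks per base bdev
reserved_bytes = data_offset * block_size   # bytes set aside up front
```

`data_size` comes out to 63488 blocks, matching the log, and the reserved region is 2048 * 512 bytes = 1 MiB per base bdev.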
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.057 [2024-11-15 11:24:40.739548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:58.057 [2024-11-15 11:24:40.739636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:58.057 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.057 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:58.057 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.057 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.057 [2024-11-15 11:24:40.747599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.057 [2024-11-15 11:24:40.750280] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:58.057 [2024-11-15 11:24:40.750337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:58.057 [2024-11-15 11:24:40.750356] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:58.057 [2024-11-15 11:24:40.750374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:58.057 [2024-11-15 11:24:40.750385] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:58.057 [2024-11-15 11:24:40.750400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:58.057 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.057 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:58.058 "name": "Existed_Raid", 00:12:58.058 "uuid": "dc460c49-631b-4351-9305-6ada77d295a8", 00:12:58.058 "strip_size_kb": 0, 00:12:58.058 "state": "configuring", 00:12:58.058 "raid_level": "raid1", 00:12:58.058 "superblock": true, 00:12:58.058 "num_base_bdevs": 4, 00:12:58.058 "num_base_bdevs_discovered": 1, 00:12:58.058 "num_base_bdevs_operational": 4, 00:12:58.058 "base_bdevs_list": [ 00:12:58.058 { 00:12:58.058 "name": "BaseBdev1", 00:12:58.058 "uuid": "2e7c5dfe-65ef-404f-8d9a-3d44ffea7df2", 00:12:58.058 "is_configured": true, 00:12:58.058 "data_offset": 2048, 00:12:58.058 "data_size": 63488 00:12:58.058 }, 00:12:58.058 { 00:12:58.058 "name": "BaseBdev2", 00:12:58.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.058 "is_configured": false, 00:12:58.058 "data_offset": 0, 00:12:58.058 "data_size": 0 00:12:58.058 }, 00:12:58.058 { 00:12:58.058 "name": "BaseBdev3", 00:12:58.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.058 "is_configured": false, 00:12:58.058 "data_offset": 0, 00:12:58.058 "data_size": 0 00:12:58.058 }, 00:12:58.058 { 00:12:58.058 "name": "BaseBdev4", 00:12:58.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.058 "is_configured": false, 00:12:58.058 "data_offset": 0, 00:12:58.058 "data_size": 0 00:12:58.058 } 00:12:58.058 ] 00:12:58.058 }' 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.058 11:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.626 [2024-11-15 11:24:41.316327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:12:58.626 BaseBdev2 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.626 [ 00:12:58.626 { 00:12:58.626 "name": "BaseBdev2", 00:12:58.626 "aliases": [ 00:12:58.626 "99900d17-a42e-4a52-97a0-038b4ee8c4ac" 00:12:58.626 ], 00:12:58.626 "product_name": "Malloc disk", 00:12:58.626 "block_size": 512, 00:12:58.626 "num_blocks": 65536, 00:12:58.626 "uuid": "99900d17-a42e-4a52-97a0-038b4ee8c4ac", 00:12:58.626 
"assigned_rate_limits": { 00:12:58.626 "rw_ios_per_sec": 0, 00:12:58.626 "rw_mbytes_per_sec": 0, 00:12:58.626 "r_mbytes_per_sec": 0, 00:12:58.626 "w_mbytes_per_sec": 0 00:12:58.626 }, 00:12:58.626 "claimed": true, 00:12:58.626 "claim_type": "exclusive_write", 00:12:58.626 "zoned": false, 00:12:58.626 "supported_io_types": { 00:12:58.626 "read": true, 00:12:58.626 "write": true, 00:12:58.626 "unmap": true, 00:12:58.626 "flush": true, 00:12:58.626 "reset": true, 00:12:58.626 "nvme_admin": false, 00:12:58.626 "nvme_io": false, 00:12:58.626 "nvme_io_md": false, 00:12:58.626 "write_zeroes": true, 00:12:58.626 "zcopy": true, 00:12:58.626 "get_zone_info": false, 00:12:58.626 "zone_management": false, 00:12:58.626 "zone_append": false, 00:12:58.626 "compare": false, 00:12:58.626 "compare_and_write": false, 00:12:58.626 "abort": true, 00:12:58.626 "seek_hole": false, 00:12:58.626 "seek_data": false, 00:12:58.626 "copy": true, 00:12:58.626 "nvme_iov_md": false 00:12:58.626 }, 00:12:58.626 "memory_domains": [ 00:12:58.626 { 00:12:58.626 "dma_device_id": "system", 00:12:58.626 "dma_device_type": 1 00:12:58.626 }, 00:12:58.626 { 00:12:58.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.626 "dma_device_type": 2 00:12:58.626 } 00:12:58.626 ], 00:12:58.626 "driver_specific": {} 00:12:58.626 } 00:12:58.626 ] 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.626 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.627 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.627 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.627 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.627 "name": "Existed_Raid", 00:12:58.627 "uuid": "dc460c49-631b-4351-9305-6ada77d295a8", 00:12:58.627 "strip_size_kb": 0, 00:12:58.627 "state": "configuring", 00:12:58.627 "raid_level": "raid1", 00:12:58.627 "superblock": true, 00:12:58.627 "num_base_bdevs": 4, 00:12:58.627 "num_base_bdevs_discovered": 2, 00:12:58.627 "num_base_bdevs_operational": 4, 
00:12:58.627 "base_bdevs_list": [ 00:12:58.627 { 00:12:58.627 "name": "BaseBdev1", 00:12:58.627 "uuid": "2e7c5dfe-65ef-404f-8d9a-3d44ffea7df2", 00:12:58.627 "is_configured": true, 00:12:58.627 "data_offset": 2048, 00:12:58.627 "data_size": 63488 00:12:58.627 }, 00:12:58.627 { 00:12:58.627 "name": "BaseBdev2", 00:12:58.627 "uuid": "99900d17-a42e-4a52-97a0-038b4ee8c4ac", 00:12:58.627 "is_configured": true, 00:12:58.627 "data_offset": 2048, 00:12:58.627 "data_size": 63488 00:12:58.627 }, 00:12:58.627 { 00:12:58.627 "name": "BaseBdev3", 00:12:58.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.627 "is_configured": false, 00:12:58.627 "data_offset": 0, 00:12:58.627 "data_size": 0 00:12:58.627 }, 00:12:58.627 { 00:12:58.627 "name": "BaseBdev4", 00:12:58.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.627 "is_configured": false, 00:12:58.627 "data_offset": 0, 00:12:58.627 "data_size": 0 00:12:58.627 } 00:12:58.627 ] 00:12:58.627 }' 00:12:58.627 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.627 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.194 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:59.194 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.194 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.194 [2024-11-15 11:24:41.906683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.194 BaseBdev3 00:12:59.194 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.194 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:59.194 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:12:59.194 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:59.194 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:59.194 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:59.194 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:59.194 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.195 [ 00:12:59.195 { 00:12:59.195 "name": "BaseBdev3", 00:12:59.195 "aliases": [ 00:12:59.195 "e6927775-1a32-4a04-bbfe-d6995430b2e7" 00:12:59.195 ], 00:12:59.195 "product_name": "Malloc disk", 00:12:59.195 "block_size": 512, 00:12:59.195 "num_blocks": 65536, 00:12:59.195 "uuid": "e6927775-1a32-4a04-bbfe-d6995430b2e7", 00:12:59.195 "assigned_rate_limits": { 00:12:59.195 "rw_ios_per_sec": 0, 00:12:59.195 "rw_mbytes_per_sec": 0, 00:12:59.195 "r_mbytes_per_sec": 0, 00:12:59.195 "w_mbytes_per_sec": 0 00:12:59.195 }, 00:12:59.195 "claimed": true, 00:12:59.195 "claim_type": "exclusive_write", 00:12:59.195 "zoned": false, 00:12:59.195 "supported_io_types": { 00:12:59.195 "read": true, 00:12:59.195 
"write": true, 00:12:59.195 "unmap": true, 00:12:59.195 "flush": true, 00:12:59.195 "reset": true, 00:12:59.195 "nvme_admin": false, 00:12:59.195 "nvme_io": false, 00:12:59.195 "nvme_io_md": false, 00:12:59.195 "write_zeroes": true, 00:12:59.195 "zcopy": true, 00:12:59.195 "get_zone_info": false, 00:12:59.195 "zone_management": false, 00:12:59.195 "zone_append": false, 00:12:59.195 "compare": false, 00:12:59.195 "compare_and_write": false, 00:12:59.195 "abort": true, 00:12:59.195 "seek_hole": false, 00:12:59.195 "seek_data": false, 00:12:59.195 "copy": true, 00:12:59.195 "nvme_iov_md": false 00:12:59.195 }, 00:12:59.195 "memory_domains": [ 00:12:59.195 { 00:12:59.195 "dma_device_id": "system", 00:12:59.195 "dma_device_type": 1 00:12:59.195 }, 00:12:59.195 { 00:12:59.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.195 "dma_device_type": 2 00:12:59.195 } 00:12:59.195 ], 00:12:59.195 "driver_specific": {} 00:12:59.195 } 00:12:59.195 ] 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.195 "name": "Existed_Raid", 00:12:59.195 "uuid": "dc460c49-631b-4351-9305-6ada77d295a8", 00:12:59.195 "strip_size_kb": 0, 00:12:59.195 "state": "configuring", 00:12:59.195 "raid_level": "raid1", 00:12:59.195 "superblock": true, 00:12:59.195 "num_base_bdevs": 4, 00:12:59.195 "num_base_bdevs_discovered": 3, 00:12:59.195 "num_base_bdevs_operational": 4, 00:12:59.195 "base_bdevs_list": [ 00:12:59.195 { 00:12:59.195 "name": "BaseBdev1", 00:12:59.195 "uuid": "2e7c5dfe-65ef-404f-8d9a-3d44ffea7df2", 00:12:59.195 "is_configured": true, 00:12:59.195 "data_offset": 2048, 00:12:59.195 "data_size": 63488 00:12:59.195 }, 00:12:59.195 { 00:12:59.195 "name": "BaseBdev2", 00:12:59.195 "uuid": 
"99900d17-a42e-4a52-97a0-038b4ee8c4ac", 00:12:59.195 "is_configured": true, 00:12:59.195 "data_offset": 2048, 00:12:59.195 "data_size": 63488 00:12:59.195 }, 00:12:59.195 { 00:12:59.195 "name": "BaseBdev3", 00:12:59.195 "uuid": "e6927775-1a32-4a04-bbfe-d6995430b2e7", 00:12:59.195 "is_configured": true, 00:12:59.195 "data_offset": 2048, 00:12:59.195 "data_size": 63488 00:12:59.195 }, 00:12:59.195 { 00:12:59.195 "name": "BaseBdev4", 00:12:59.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.195 "is_configured": false, 00:12:59.195 "data_offset": 0, 00:12:59.195 "data_size": 0 00:12:59.195 } 00:12:59.195 ] 00:12:59.195 }' 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.195 11:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.762 [2024-11-15 11:24:42.494818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:59.762 [2024-11-15 11:24:42.495431] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:59.762 [2024-11-15 11:24:42.495458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.762 BaseBdev4 00:12:59.762 [2024-11-15 11:24:42.495874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:59.762 [2024-11-15 11:24:42.496101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:59.762 [2024-11-15 11:24:42.496139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:59.762 [2024-11-15 11:24:42.496381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.762 [ 00:12:59.762 { 00:12:59.762 "name": "BaseBdev4", 00:12:59.762 "aliases": [ 00:12:59.762 "235fa620-dec1-4661-869d-f4253816e7ea" 00:12:59.762 ], 00:12:59.762 "product_name": "Malloc disk", 00:12:59.762 "block_size": 512, 00:12:59.762 
"num_blocks": 65536, 00:12:59.762 "uuid": "235fa620-dec1-4661-869d-f4253816e7ea", 00:12:59.762 "assigned_rate_limits": { 00:12:59.762 "rw_ios_per_sec": 0, 00:12:59.762 "rw_mbytes_per_sec": 0, 00:12:59.762 "r_mbytes_per_sec": 0, 00:12:59.762 "w_mbytes_per_sec": 0 00:12:59.762 }, 00:12:59.762 "claimed": true, 00:12:59.762 "claim_type": "exclusive_write", 00:12:59.762 "zoned": false, 00:12:59.762 "supported_io_types": { 00:12:59.762 "read": true, 00:12:59.762 "write": true, 00:12:59.762 "unmap": true, 00:12:59.762 "flush": true, 00:12:59.762 "reset": true, 00:12:59.762 "nvme_admin": false, 00:12:59.762 "nvme_io": false, 00:12:59.762 "nvme_io_md": false, 00:12:59.762 "write_zeroes": true, 00:12:59.762 "zcopy": true, 00:12:59.762 "get_zone_info": false, 00:12:59.762 "zone_management": false, 00:12:59.762 "zone_append": false, 00:12:59.762 "compare": false, 00:12:59.762 "compare_and_write": false, 00:12:59.762 "abort": true, 00:12:59.762 "seek_hole": false, 00:12:59.762 "seek_data": false, 00:12:59.762 "copy": true, 00:12:59.762 "nvme_iov_md": false 00:12:59.762 }, 00:12:59.762 "memory_domains": [ 00:12:59.762 { 00:12:59.762 "dma_device_id": "system", 00:12:59.762 "dma_device_type": 1 00:12:59.762 }, 00:12:59.762 { 00:12:59.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.762 "dma_device_type": 2 00:12:59.762 } 00:12:59.762 ], 00:12:59.762 "driver_specific": {} 00:12:59.762 } 00:12:59.762 ] 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:59.762 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.763 "name": "Existed_Raid", 00:12:59.763 "uuid": "dc460c49-631b-4351-9305-6ada77d295a8", 00:12:59.763 "strip_size_kb": 0, 00:12:59.763 "state": "online", 00:12:59.763 "raid_level": "raid1", 00:12:59.763 "superblock": true, 00:12:59.763 "num_base_bdevs": 4, 
00:12:59.763 "num_base_bdevs_discovered": 4, 00:12:59.763 "num_base_bdevs_operational": 4, 00:12:59.763 "base_bdevs_list": [ 00:12:59.763 { 00:12:59.763 "name": "BaseBdev1", 00:12:59.763 "uuid": "2e7c5dfe-65ef-404f-8d9a-3d44ffea7df2", 00:12:59.763 "is_configured": true, 00:12:59.763 "data_offset": 2048, 00:12:59.763 "data_size": 63488 00:12:59.763 }, 00:12:59.763 { 00:12:59.763 "name": "BaseBdev2", 00:12:59.763 "uuid": "99900d17-a42e-4a52-97a0-038b4ee8c4ac", 00:12:59.763 "is_configured": true, 00:12:59.763 "data_offset": 2048, 00:12:59.763 "data_size": 63488 00:12:59.763 }, 00:12:59.763 { 00:12:59.763 "name": "BaseBdev3", 00:12:59.763 "uuid": "e6927775-1a32-4a04-bbfe-d6995430b2e7", 00:12:59.763 "is_configured": true, 00:12:59.763 "data_offset": 2048, 00:12:59.763 "data_size": 63488 00:12:59.763 }, 00:12:59.763 { 00:12:59.763 "name": "BaseBdev4", 00:12:59.763 "uuid": "235fa620-dec1-4661-869d-f4253816e7ea", 00:12:59.763 "is_configured": true, 00:12:59.763 "data_offset": 2048, 00:12:59.763 "data_size": 63488 00:12:59.763 } 00:12:59.763 ] 00:12:59.763 }' 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.763 11:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:00.343 
11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.343 [2024-11-15 11:24:43.039583] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:00.343 "name": "Existed_Raid", 00:13:00.343 "aliases": [ 00:13:00.343 "dc460c49-631b-4351-9305-6ada77d295a8" 00:13:00.343 ], 00:13:00.343 "product_name": "Raid Volume", 00:13:00.343 "block_size": 512, 00:13:00.343 "num_blocks": 63488, 00:13:00.343 "uuid": "dc460c49-631b-4351-9305-6ada77d295a8", 00:13:00.343 "assigned_rate_limits": { 00:13:00.343 "rw_ios_per_sec": 0, 00:13:00.343 "rw_mbytes_per_sec": 0, 00:13:00.343 "r_mbytes_per_sec": 0, 00:13:00.343 "w_mbytes_per_sec": 0 00:13:00.343 }, 00:13:00.343 "claimed": false, 00:13:00.343 "zoned": false, 00:13:00.343 "supported_io_types": { 00:13:00.343 "read": true, 00:13:00.343 "write": true, 00:13:00.343 "unmap": false, 00:13:00.343 "flush": false, 00:13:00.343 "reset": true, 00:13:00.343 "nvme_admin": false, 00:13:00.343 "nvme_io": false, 00:13:00.343 "nvme_io_md": false, 00:13:00.343 "write_zeroes": true, 00:13:00.343 "zcopy": false, 00:13:00.343 "get_zone_info": false, 00:13:00.343 "zone_management": false, 00:13:00.343 "zone_append": false, 00:13:00.343 "compare": false, 00:13:00.343 "compare_and_write": false, 00:13:00.343 "abort": false, 00:13:00.343 "seek_hole": false, 00:13:00.343 "seek_data": false, 00:13:00.343 "copy": false, 00:13:00.343 
"nvme_iov_md": false 00:13:00.343 }, 00:13:00.343 "memory_domains": [ 00:13:00.343 { 00:13:00.343 "dma_device_id": "system", 00:13:00.343 "dma_device_type": 1 00:13:00.343 }, 00:13:00.343 { 00:13:00.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.343 "dma_device_type": 2 00:13:00.343 }, 00:13:00.343 { 00:13:00.343 "dma_device_id": "system", 00:13:00.343 "dma_device_type": 1 00:13:00.343 }, 00:13:00.343 { 00:13:00.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.343 "dma_device_type": 2 00:13:00.343 }, 00:13:00.343 { 00:13:00.343 "dma_device_id": "system", 00:13:00.343 "dma_device_type": 1 00:13:00.343 }, 00:13:00.343 { 00:13:00.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.343 "dma_device_type": 2 00:13:00.343 }, 00:13:00.343 { 00:13:00.343 "dma_device_id": "system", 00:13:00.343 "dma_device_type": 1 00:13:00.343 }, 00:13:00.343 { 00:13:00.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.343 "dma_device_type": 2 00:13:00.343 } 00:13:00.343 ], 00:13:00.343 "driver_specific": { 00:13:00.343 "raid": { 00:13:00.343 "uuid": "dc460c49-631b-4351-9305-6ada77d295a8", 00:13:00.343 "strip_size_kb": 0, 00:13:00.343 "state": "online", 00:13:00.343 "raid_level": "raid1", 00:13:00.343 "superblock": true, 00:13:00.343 "num_base_bdevs": 4, 00:13:00.343 "num_base_bdevs_discovered": 4, 00:13:00.343 "num_base_bdevs_operational": 4, 00:13:00.343 "base_bdevs_list": [ 00:13:00.343 { 00:13:00.343 "name": "BaseBdev1", 00:13:00.343 "uuid": "2e7c5dfe-65ef-404f-8d9a-3d44ffea7df2", 00:13:00.343 "is_configured": true, 00:13:00.343 "data_offset": 2048, 00:13:00.343 "data_size": 63488 00:13:00.343 }, 00:13:00.343 { 00:13:00.343 "name": "BaseBdev2", 00:13:00.343 "uuid": "99900d17-a42e-4a52-97a0-038b4ee8c4ac", 00:13:00.343 "is_configured": true, 00:13:00.343 "data_offset": 2048, 00:13:00.343 "data_size": 63488 00:13:00.343 }, 00:13:00.343 { 00:13:00.343 "name": "BaseBdev3", 00:13:00.343 "uuid": "e6927775-1a32-4a04-bbfe-d6995430b2e7", 00:13:00.343 "is_configured": true, 
00:13:00.343 "data_offset": 2048, 00:13:00.343 "data_size": 63488 00:13:00.343 }, 00:13:00.343 { 00:13:00.343 "name": "BaseBdev4", 00:13:00.343 "uuid": "235fa620-dec1-4661-869d-f4253816e7ea", 00:13:00.343 "is_configured": true, 00:13:00.343 "data_offset": 2048, 00:13:00.343 "data_size": 63488 00:13:00.343 } 00:13:00.343 ] 00:13:00.343 } 00:13:00.343 } 00:13:00.343 }' 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:00.343 BaseBdev2 00:13:00.343 BaseBdev3 00:13:00.343 BaseBdev4' 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.343 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.343 11:24:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.344 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.344 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:00.344 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.344 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.344 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.615 [2024-11-15 11:24:43.395218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:00.615 11:24:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.615 "name": "Existed_Raid", 00:13:00.615 "uuid": "dc460c49-631b-4351-9305-6ada77d295a8", 00:13:00.615 "strip_size_kb": 0, 00:13:00.615 
"state": "online", 00:13:00.615 "raid_level": "raid1", 00:13:00.615 "superblock": true, 00:13:00.615 "num_base_bdevs": 4, 00:13:00.615 "num_base_bdevs_discovered": 3, 00:13:00.615 "num_base_bdevs_operational": 3, 00:13:00.615 "base_bdevs_list": [ 00:13:00.615 { 00:13:00.615 "name": null, 00:13:00.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.615 "is_configured": false, 00:13:00.615 "data_offset": 0, 00:13:00.615 "data_size": 63488 00:13:00.615 }, 00:13:00.615 { 00:13:00.615 "name": "BaseBdev2", 00:13:00.615 "uuid": "99900d17-a42e-4a52-97a0-038b4ee8c4ac", 00:13:00.615 "is_configured": true, 00:13:00.615 "data_offset": 2048, 00:13:00.615 "data_size": 63488 00:13:00.615 }, 00:13:00.615 { 00:13:00.615 "name": "BaseBdev3", 00:13:00.615 "uuid": "e6927775-1a32-4a04-bbfe-d6995430b2e7", 00:13:00.615 "is_configured": true, 00:13:00.615 "data_offset": 2048, 00:13:00.615 "data_size": 63488 00:13:00.615 }, 00:13:00.615 { 00:13:00.615 "name": "BaseBdev4", 00:13:00.615 "uuid": "235fa620-dec1-4661-869d-f4253816e7ea", 00:13:00.615 "is_configured": true, 00:13:00.615 "data_offset": 2048, 00:13:00.615 "data_size": 63488 00:13:00.615 } 00:13:00.615 ] 00:13:00.615 }' 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.615 11:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.182 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:01.182 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:01.182 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.182 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:01.182 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.182 11:24:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.182 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.182 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:01.182 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:01.182 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:01.183 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.183 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.183 [2024-11-15 11:24:44.058607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.441 [2024-11-15 11:24:44.199011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.441 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.441 [2024-11-15 11:24:44.338650] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:01.441 [2024-11-15 11:24:44.338952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.700 [2024-11-15 11:24:44.419065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.700 [2024-11-15 11:24:44.419366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.700 [2024-11-15 11:24:44.419513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.700 BaseBdev2 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:01.700 [ 00:13:01.700 { 00:13:01.700 "name": "BaseBdev2", 00:13:01.700 "aliases": [ 00:13:01.700 "6c1567e1-89f2-474e-8c3c-193f13c8fba2" 00:13:01.700 ], 00:13:01.700 "product_name": "Malloc disk", 00:13:01.700 "block_size": 512, 00:13:01.700 "num_blocks": 65536, 00:13:01.700 "uuid": "6c1567e1-89f2-474e-8c3c-193f13c8fba2", 00:13:01.700 "assigned_rate_limits": { 00:13:01.700 "rw_ios_per_sec": 0, 00:13:01.700 "rw_mbytes_per_sec": 0, 00:13:01.700 "r_mbytes_per_sec": 0, 00:13:01.700 "w_mbytes_per_sec": 0 00:13:01.700 }, 00:13:01.700 "claimed": false, 00:13:01.700 "zoned": false, 00:13:01.700 "supported_io_types": { 00:13:01.700 "read": true, 00:13:01.700 "write": true, 00:13:01.700 "unmap": true, 00:13:01.700 "flush": true, 00:13:01.700 "reset": true, 00:13:01.700 "nvme_admin": false, 00:13:01.700 "nvme_io": false, 00:13:01.700 "nvme_io_md": false, 00:13:01.700 "write_zeroes": true, 00:13:01.700 "zcopy": true, 00:13:01.700 "get_zone_info": false, 00:13:01.700 "zone_management": false, 00:13:01.700 "zone_append": false, 00:13:01.700 "compare": false, 00:13:01.700 "compare_and_write": false, 00:13:01.700 "abort": true, 00:13:01.700 "seek_hole": false, 00:13:01.700 "seek_data": false, 00:13:01.700 "copy": true, 00:13:01.700 "nvme_iov_md": false 00:13:01.700 }, 00:13:01.700 "memory_domains": [ 00:13:01.700 { 00:13:01.700 "dma_device_id": "system", 00:13:01.700 "dma_device_type": 1 00:13:01.700 }, 00:13:01.700 { 00:13:01.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.700 "dma_device_type": 2 00:13:01.700 } 00:13:01.700 ], 00:13:01.700 "driver_specific": {} 00:13:01.700 } 00:13:01.700 ] 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.700 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:01.701 11:24:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.701 BaseBdev3 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.701 11:24:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.701 [ 00:13:01.701 { 00:13:01.701 "name": "BaseBdev3", 00:13:01.701 "aliases": [ 00:13:01.701 "bddeb86e-d794-48a9-93e9-a2e5977091e3" 00:13:01.701 ], 00:13:01.701 "product_name": "Malloc disk", 00:13:01.701 "block_size": 512, 00:13:01.701 "num_blocks": 65536, 00:13:01.701 "uuid": "bddeb86e-d794-48a9-93e9-a2e5977091e3", 00:13:01.701 "assigned_rate_limits": { 00:13:01.701 "rw_ios_per_sec": 0, 00:13:01.701 "rw_mbytes_per_sec": 0, 00:13:01.701 "r_mbytes_per_sec": 0, 00:13:01.701 "w_mbytes_per_sec": 0 00:13:01.701 }, 00:13:01.701 "claimed": false, 00:13:01.701 "zoned": false, 00:13:01.701 "supported_io_types": { 00:13:01.701 "read": true, 00:13:01.701 "write": true, 00:13:01.701 "unmap": true, 00:13:01.701 "flush": true, 00:13:01.701 "reset": true, 00:13:01.701 "nvme_admin": false, 00:13:01.701 "nvme_io": false, 00:13:01.701 "nvme_io_md": false, 00:13:01.701 "write_zeroes": true, 00:13:01.701 "zcopy": true, 00:13:01.701 "get_zone_info": false, 00:13:01.701 "zone_management": false, 00:13:01.701 "zone_append": false, 00:13:01.701 "compare": false, 00:13:01.701 "compare_and_write": false, 00:13:01.701 "abort": true, 00:13:01.701 "seek_hole": false, 00:13:01.701 "seek_data": false, 00:13:01.701 "copy": true, 00:13:01.701 "nvme_iov_md": false 00:13:01.701 }, 00:13:01.701 "memory_domains": [ 00:13:01.701 { 00:13:01.701 "dma_device_id": "system", 00:13:01.701 "dma_device_type": 1 00:13:01.701 }, 00:13:01.701 { 00:13:01.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.701 "dma_device_type": 2 00:13:01.701 } 00:13:01.701 ], 00:13:01.701 "driver_specific": {} 00:13:01.701 } 00:13:01.701 ] 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.701 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.959 BaseBdev4 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.959 [ 00:13:01.959 { 00:13:01.959 "name": "BaseBdev4", 00:13:01.959 "aliases": [ 00:13:01.959 "f42c3fc9-9778-4826-b62c-68d30ad85345" 00:13:01.959 ], 00:13:01.959 "product_name": "Malloc disk", 00:13:01.959 "block_size": 512, 00:13:01.959 "num_blocks": 65536, 00:13:01.959 "uuid": "f42c3fc9-9778-4826-b62c-68d30ad85345", 00:13:01.959 "assigned_rate_limits": { 00:13:01.959 "rw_ios_per_sec": 0, 00:13:01.959 "rw_mbytes_per_sec": 0, 00:13:01.959 "r_mbytes_per_sec": 0, 00:13:01.959 "w_mbytes_per_sec": 0 00:13:01.959 }, 00:13:01.959 "claimed": false, 00:13:01.959 "zoned": false, 00:13:01.959 "supported_io_types": { 00:13:01.959 "read": true, 00:13:01.959 "write": true, 00:13:01.959 "unmap": true, 00:13:01.959 "flush": true, 00:13:01.959 "reset": true, 00:13:01.959 "nvme_admin": false, 00:13:01.959 "nvme_io": false, 00:13:01.959 "nvme_io_md": false, 00:13:01.959 "write_zeroes": true, 00:13:01.959 "zcopy": true, 00:13:01.959 "get_zone_info": false, 00:13:01.959 "zone_management": false, 00:13:01.959 "zone_append": false, 00:13:01.959 "compare": false, 00:13:01.959 "compare_and_write": false, 00:13:01.959 "abort": true, 00:13:01.959 "seek_hole": false, 00:13:01.959 "seek_data": false, 00:13:01.959 "copy": true, 00:13:01.959 "nvme_iov_md": false 00:13:01.959 }, 00:13:01.959 "memory_domains": [ 00:13:01.959 { 00:13:01.959 "dma_device_id": "system", 00:13:01.959 "dma_device_type": 1 00:13:01.959 }, 00:13:01.959 { 00:13:01.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.959 "dma_device_type": 2 00:13:01.959 } 00:13:01.959 ], 00:13:01.959 "driver_specific": {} 00:13:01.959 } 00:13:01.959 ] 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.959 [2024-11-15 11:24:44.708990] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:01.959 [2024-11-15 11:24:44.709065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:01.959 [2024-11-15 11:24:44.709093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.959 [2024-11-15 11:24:44.711731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.959 [2024-11-15 11:24:44.711792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.959 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.959 "name": "Existed_Raid", 00:13:01.959 "uuid": "1465b4c3-d4b9-49e4-ab3a-acc28e1dc746", 00:13:01.959 "strip_size_kb": 0, 00:13:01.959 "state": "configuring", 00:13:01.959 "raid_level": "raid1", 00:13:01.959 "superblock": true, 00:13:01.959 "num_base_bdevs": 4, 00:13:01.959 "num_base_bdevs_discovered": 3, 00:13:01.959 "num_base_bdevs_operational": 4, 00:13:01.959 "base_bdevs_list": [ 00:13:01.959 { 00:13:01.959 "name": "BaseBdev1", 00:13:01.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.959 "is_configured": false, 00:13:01.959 "data_offset": 0, 00:13:01.959 "data_size": 0 00:13:01.959 }, 00:13:01.959 { 00:13:01.959 "name": "BaseBdev2", 00:13:01.959 "uuid": "6c1567e1-89f2-474e-8c3c-193f13c8fba2", 
00:13:01.959 "is_configured": true, 00:13:01.959 "data_offset": 2048, 00:13:01.959 "data_size": 63488 00:13:01.959 }, 00:13:01.959 { 00:13:01.959 "name": "BaseBdev3", 00:13:01.960 "uuid": "bddeb86e-d794-48a9-93e9-a2e5977091e3", 00:13:01.960 "is_configured": true, 00:13:01.960 "data_offset": 2048, 00:13:01.960 "data_size": 63488 00:13:01.960 }, 00:13:01.960 { 00:13:01.960 "name": "BaseBdev4", 00:13:01.960 "uuid": "f42c3fc9-9778-4826-b62c-68d30ad85345", 00:13:01.960 "is_configured": true, 00:13:01.960 "data_offset": 2048, 00:13:01.960 "data_size": 63488 00:13:01.960 } 00:13:01.960 ] 00:13:01.960 }' 00:13:01.960 11:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.960 11:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.526 [2024-11-15 11:24:45.229248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.526 "name": "Existed_Raid", 00:13:02.526 "uuid": "1465b4c3-d4b9-49e4-ab3a-acc28e1dc746", 00:13:02.526 "strip_size_kb": 0, 00:13:02.526 "state": "configuring", 00:13:02.526 "raid_level": "raid1", 00:13:02.526 "superblock": true, 00:13:02.526 "num_base_bdevs": 4, 00:13:02.526 "num_base_bdevs_discovered": 2, 00:13:02.526 "num_base_bdevs_operational": 4, 00:13:02.526 "base_bdevs_list": [ 00:13:02.526 { 00:13:02.526 "name": "BaseBdev1", 00:13:02.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.526 "is_configured": false, 00:13:02.526 "data_offset": 0, 00:13:02.526 "data_size": 0 00:13:02.526 }, 00:13:02.526 { 00:13:02.526 "name": null, 00:13:02.526 "uuid": "6c1567e1-89f2-474e-8c3c-193f13c8fba2", 00:13:02.526 
"is_configured": false, 00:13:02.526 "data_offset": 0, 00:13:02.526 "data_size": 63488 00:13:02.526 }, 00:13:02.526 { 00:13:02.526 "name": "BaseBdev3", 00:13:02.526 "uuid": "bddeb86e-d794-48a9-93e9-a2e5977091e3", 00:13:02.526 "is_configured": true, 00:13:02.526 "data_offset": 2048, 00:13:02.526 "data_size": 63488 00:13:02.526 }, 00:13:02.526 { 00:13:02.526 "name": "BaseBdev4", 00:13:02.526 "uuid": "f42c3fc9-9778-4826-b62c-68d30ad85345", 00:13:02.526 "is_configured": true, 00:13:02.526 "data_offset": 2048, 00:13:02.526 "data_size": 63488 00:13:02.526 } 00:13:02.526 ] 00:13:02.526 }' 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.526 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.094 [2024-11-15 11:24:45.835695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.094 BaseBdev1 
00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.094 [ 00:13:03.094 { 00:13:03.094 "name": "BaseBdev1", 00:13:03.094 "aliases": [ 00:13:03.094 "2b72bb10-e830-4d28-90bd-5e87cf79dfec" 00:13:03.094 ], 00:13:03.094 "product_name": "Malloc disk", 00:13:03.094 "block_size": 512, 00:13:03.094 "num_blocks": 65536, 00:13:03.094 "uuid": "2b72bb10-e830-4d28-90bd-5e87cf79dfec", 00:13:03.094 "assigned_rate_limits": { 00:13:03.094 
"rw_ios_per_sec": 0, 00:13:03.094 "rw_mbytes_per_sec": 0, 00:13:03.094 "r_mbytes_per_sec": 0, 00:13:03.094 "w_mbytes_per_sec": 0 00:13:03.094 }, 00:13:03.094 "claimed": true, 00:13:03.094 "claim_type": "exclusive_write", 00:13:03.094 "zoned": false, 00:13:03.094 "supported_io_types": { 00:13:03.094 "read": true, 00:13:03.094 "write": true, 00:13:03.094 "unmap": true, 00:13:03.094 "flush": true, 00:13:03.094 "reset": true, 00:13:03.094 "nvme_admin": false, 00:13:03.094 "nvme_io": false, 00:13:03.094 "nvme_io_md": false, 00:13:03.094 "write_zeroes": true, 00:13:03.094 "zcopy": true, 00:13:03.094 "get_zone_info": false, 00:13:03.094 "zone_management": false, 00:13:03.094 "zone_append": false, 00:13:03.094 "compare": false, 00:13:03.094 "compare_and_write": false, 00:13:03.094 "abort": true, 00:13:03.094 "seek_hole": false, 00:13:03.094 "seek_data": false, 00:13:03.094 "copy": true, 00:13:03.094 "nvme_iov_md": false 00:13:03.094 }, 00:13:03.094 "memory_domains": [ 00:13:03.094 { 00:13:03.094 "dma_device_id": "system", 00:13:03.094 "dma_device_type": 1 00:13:03.094 }, 00:13:03.094 { 00:13:03.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.094 "dma_device_type": 2 00:13:03.094 } 00:13:03.094 ], 00:13:03.094 "driver_specific": {} 00:13:03.094 } 00:13:03.094 ] 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.094 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.094 "name": "Existed_Raid", 00:13:03.094 "uuid": "1465b4c3-d4b9-49e4-ab3a-acc28e1dc746", 00:13:03.094 "strip_size_kb": 0, 00:13:03.094 "state": "configuring", 00:13:03.094 "raid_level": "raid1", 00:13:03.094 "superblock": true, 00:13:03.094 "num_base_bdevs": 4, 00:13:03.094 "num_base_bdevs_discovered": 3, 00:13:03.094 "num_base_bdevs_operational": 4, 00:13:03.094 "base_bdevs_list": [ 00:13:03.094 { 00:13:03.094 "name": "BaseBdev1", 00:13:03.094 "uuid": "2b72bb10-e830-4d28-90bd-5e87cf79dfec", 00:13:03.094 "is_configured": true, 00:13:03.094 "data_offset": 2048, 00:13:03.094 "data_size": 63488 
00:13:03.094 }, 00:13:03.094 { 00:13:03.094 "name": null, 00:13:03.094 "uuid": "6c1567e1-89f2-474e-8c3c-193f13c8fba2", 00:13:03.094 "is_configured": false, 00:13:03.094 "data_offset": 0, 00:13:03.094 "data_size": 63488 00:13:03.094 }, 00:13:03.094 { 00:13:03.094 "name": "BaseBdev3", 00:13:03.094 "uuid": "bddeb86e-d794-48a9-93e9-a2e5977091e3", 00:13:03.094 "is_configured": true, 00:13:03.094 "data_offset": 2048, 00:13:03.094 "data_size": 63488 00:13:03.094 }, 00:13:03.094 { 00:13:03.094 "name": "BaseBdev4", 00:13:03.095 "uuid": "f42c3fc9-9778-4826-b62c-68d30ad85345", 00:13:03.095 "is_configured": true, 00:13:03.095 "data_offset": 2048, 00:13:03.095 "data_size": 63488 00:13:03.095 } 00:13:03.095 ] 00:13:03.095 }' 00:13:03.095 11:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.095 11:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.661 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:03.661 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.661 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.661 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:03.661 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:03.661 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.661 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.662 
[2024-11-15 11:24:46.431939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.662 11:24:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.662 "name": "Existed_Raid", 00:13:03.662 "uuid": "1465b4c3-d4b9-49e4-ab3a-acc28e1dc746", 00:13:03.662 "strip_size_kb": 0, 00:13:03.662 "state": "configuring", 00:13:03.662 "raid_level": "raid1", 00:13:03.662 "superblock": true, 00:13:03.662 "num_base_bdevs": 4, 00:13:03.662 "num_base_bdevs_discovered": 2, 00:13:03.662 "num_base_bdevs_operational": 4, 00:13:03.662 "base_bdevs_list": [ 00:13:03.662 { 00:13:03.662 "name": "BaseBdev1", 00:13:03.662 "uuid": "2b72bb10-e830-4d28-90bd-5e87cf79dfec", 00:13:03.662 "is_configured": true, 00:13:03.662 "data_offset": 2048, 00:13:03.662 "data_size": 63488 00:13:03.662 }, 00:13:03.662 { 00:13:03.662 "name": null, 00:13:03.662 "uuid": "6c1567e1-89f2-474e-8c3c-193f13c8fba2", 00:13:03.662 "is_configured": false, 00:13:03.662 "data_offset": 0, 00:13:03.662 "data_size": 63488 00:13:03.662 }, 00:13:03.662 { 00:13:03.662 "name": null, 00:13:03.662 "uuid": "bddeb86e-d794-48a9-93e9-a2e5977091e3", 00:13:03.662 "is_configured": false, 00:13:03.662 "data_offset": 0, 00:13:03.662 "data_size": 63488 00:13:03.662 }, 00:13:03.662 { 00:13:03.662 "name": "BaseBdev4", 00:13:03.662 "uuid": "f42c3fc9-9778-4826-b62c-68d30ad85345", 00:13:03.662 "is_configured": true, 00:13:03.662 "data_offset": 2048, 00:13:03.662 "data_size": 63488 00:13:03.662 } 00:13:03.662 ] 00:13:03.662 }' 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.662 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.230 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.230 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.230 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.230 11:24:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:04.230 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.230 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:04.230 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:04.230 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.230 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.230 [2024-11-15 11:24:46.996069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.230 11:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.230 11:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.230 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.230 "name": "Existed_Raid", 00:13:04.230 "uuid": "1465b4c3-d4b9-49e4-ab3a-acc28e1dc746", 00:13:04.230 "strip_size_kb": 0, 00:13:04.230 "state": "configuring", 00:13:04.230 "raid_level": "raid1", 00:13:04.230 "superblock": true, 00:13:04.230 "num_base_bdevs": 4, 00:13:04.231 "num_base_bdevs_discovered": 3, 00:13:04.231 "num_base_bdevs_operational": 4, 00:13:04.231 "base_bdevs_list": [ 00:13:04.231 { 00:13:04.231 "name": "BaseBdev1", 00:13:04.231 "uuid": "2b72bb10-e830-4d28-90bd-5e87cf79dfec", 00:13:04.231 "is_configured": true, 00:13:04.231 "data_offset": 2048, 00:13:04.231 "data_size": 63488 00:13:04.231 }, 00:13:04.231 { 00:13:04.231 "name": null, 00:13:04.231 "uuid": "6c1567e1-89f2-474e-8c3c-193f13c8fba2", 00:13:04.231 "is_configured": false, 00:13:04.231 "data_offset": 0, 00:13:04.231 "data_size": 63488 00:13:04.231 }, 00:13:04.231 { 00:13:04.231 "name": "BaseBdev3", 00:13:04.231 "uuid": "bddeb86e-d794-48a9-93e9-a2e5977091e3", 00:13:04.231 "is_configured": true, 00:13:04.231 "data_offset": 2048, 00:13:04.231 "data_size": 63488 00:13:04.231 }, 00:13:04.231 { 00:13:04.231 "name": "BaseBdev4", 00:13:04.231 "uuid": 
"f42c3fc9-9778-4826-b62c-68d30ad85345", 00:13:04.231 "is_configured": true, 00:13:04.231 "data_offset": 2048, 00:13:04.231 "data_size": 63488 00:13:04.231 } 00:13:04.231 ] 00:13:04.231 }' 00:13:04.231 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.231 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.798 [2024-11-15 11:24:47.572349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.798 "name": "Existed_Raid", 00:13:04.798 "uuid": "1465b4c3-d4b9-49e4-ab3a-acc28e1dc746", 00:13:04.798 "strip_size_kb": 0, 00:13:04.798 "state": "configuring", 00:13:04.798 "raid_level": "raid1", 00:13:04.798 "superblock": true, 00:13:04.798 "num_base_bdevs": 4, 00:13:04.798 "num_base_bdevs_discovered": 2, 00:13:04.798 "num_base_bdevs_operational": 4, 00:13:04.798 "base_bdevs_list": [ 00:13:04.798 { 00:13:04.798 "name": null, 00:13:04.798 
"uuid": "2b72bb10-e830-4d28-90bd-5e87cf79dfec", 00:13:04.798 "is_configured": false, 00:13:04.798 "data_offset": 0, 00:13:04.798 "data_size": 63488 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "name": null, 00:13:04.798 "uuid": "6c1567e1-89f2-474e-8c3c-193f13c8fba2", 00:13:04.798 "is_configured": false, 00:13:04.798 "data_offset": 0, 00:13:04.798 "data_size": 63488 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "name": "BaseBdev3", 00:13:04.798 "uuid": "bddeb86e-d794-48a9-93e9-a2e5977091e3", 00:13:04.798 "is_configured": true, 00:13:04.798 "data_offset": 2048, 00:13:04.798 "data_size": 63488 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "name": "BaseBdev4", 00:13:04.798 "uuid": "f42c3fc9-9778-4826-b62c-68d30ad85345", 00:13:04.798 "is_configured": true, 00:13:04.798 "data_offset": 2048, 00:13:04.798 "data_size": 63488 00:13:04.798 } 00:13:04.798 ] 00:13:04.798 }' 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.798 11:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.365 [2024-11-15 11:24:48.251845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.365 11:24:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.365 "name": "Existed_Raid", 00:13:05.365 "uuid": "1465b4c3-d4b9-49e4-ab3a-acc28e1dc746", 00:13:05.365 "strip_size_kb": 0, 00:13:05.365 "state": "configuring", 00:13:05.365 "raid_level": "raid1", 00:13:05.365 "superblock": true, 00:13:05.365 "num_base_bdevs": 4, 00:13:05.365 "num_base_bdevs_discovered": 3, 00:13:05.365 "num_base_bdevs_operational": 4, 00:13:05.365 "base_bdevs_list": [ 00:13:05.365 { 00:13:05.365 "name": null, 00:13:05.365 "uuid": "2b72bb10-e830-4d28-90bd-5e87cf79dfec", 00:13:05.365 "is_configured": false, 00:13:05.365 "data_offset": 0, 00:13:05.365 "data_size": 63488 00:13:05.365 }, 00:13:05.365 { 00:13:05.365 "name": "BaseBdev2", 00:13:05.365 "uuid": "6c1567e1-89f2-474e-8c3c-193f13c8fba2", 00:13:05.365 "is_configured": true, 00:13:05.365 "data_offset": 2048, 00:13:05.365 "data_size": 63488 00:13:05.365 }, 00:13:05.365 { 00:13:05.365 "name": "BaseBdev3", 00:13:05.365 "uuid": "bddeb86e-d794-48a9-93e9-a2e5977091e3", 00:13:05.365 "is_configured": true, 00:13:05.365 "data_offset": 2048, 00:13:05.365 "data_size": 63488 00:13:05.365 }, 00:13:05.365 { 00:13:05.365 "name": "BaseBdev4", 00:13:05.365 "uuid": "f42c3fc9-9778-4826-b62c-68d30ad85345", 00:13:05.365 "is_configured": true, 00:13:05.365 "data_offset": 2048, 00:13:05.365 "data_size": 63488 00:13:05.365 } 00:13:05.365 ] 00:13:05.365 }' 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.365 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.932 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:05.932 11:24:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.932 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.932 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.932 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.932 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:05.932 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.932 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:05.932 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.932 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.932 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.190 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2b72bb10-e830-4d28-90bd-5e87cf79dfec 00:13:06.190 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.190 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.190 [2024-11-15 11:24:48.928558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:06.190 [2024-11-15 11:24:48.928909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:06.190 [2024-11-15 11:24:48.928965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.190 [2024-11-15 11:24:48.929332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:06.190 NewBaseBdev 00:13:06.190 [2024-11-15 11:24:48.929597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:06.190 [2024-11-15 11:24:48.929615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:06.190 [2024-11-15 11:24:48.929788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.190 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.190 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:06.190 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:06.190 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:06.190 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.191 11:24:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.191 [ 00:13:06.191 { 00:13:06.191 "name": "NewBaseBdev", 00:13:06.191 "aliases": [ 00:13:06.191 "2b72bb10-e830-4d28-90bd-5e87cf79dfec" 00:13:06.191 ], 00:13:06.191 "product_name": "Malloc disk", 00:13:06.191 "block_size": 512, 00:13:06.191 "num_blocks": 65536, 00:13:06.191 "uuid": "2b72bb10-e830-4d28-90bd-5e87cf79dfec", 00:13:06.191 "assigned_rate_limits": { 00:13:06.191 "rw_ios_per_sec": 0, 00:13:06.191 "rw_mbytes_per_sec": 0, 00:13:06.191 "r_mbytes_per_sec": 0, 00:13:06.191 "w_mbytes_per_sec": 0 00:13:06.191 }, 00:13:06.191 "claimed": true, 00:13:06.191 "claim_type": "exclusive_write", 00:13:06.191 "zoned": false, 00:13:06.191 "supported_io_types": { 00:13:06.191 "read": true, 00:13:06.191 "write": true, 00:13:06.191 "unmap": true, 00:13:06.191 "flush": true, 00:13:06.191 "reset": true, 00:13:06.191 "nvme_admin": false, 00:13:06.191 "nvme_io": false, 00:13:06.191 "nvme_io_md": false, 00:13:06.191 "write_zeroes": true, 00:13:06.191 "zcopy": true, 00:13:06.191 "get_zone_info": false, 00:13:06.191 "zone_management": false, 00:13:06.191 "zone_append": false, 00:13:06.191 "compare": false, 00:13:06.191 "compare_and_write": false, 00:13:06.191 "abort": true, 00:13:06.191 "seek_hole": false, 00:13:06.191 "seek_data": false, 00:13:06.191 "copy": true, 00:13:06.191 "nvme_iov_md": false 00:13:06.191 }, 00:13:06.191 "memory_domains": [ 00:13:06.191 { 00:13:06.191 "dma_device_id": "system", 00:13:06.191 "dma_device_type": 1 00:13:06.191 }, 00:13:06.191 { 00:13:06.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.191 "dma_device_type": 2 00:13:06.191 } 00:13:06.191 ], 00:13:06.191 "driver_specific": {} 00:13:06.191 } 00:13:06.191 ] 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:06.191 11:24:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.191 11:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.191 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.191 "name": "Existed_Raid", 00:13:06.191 "uuid": "1465b4c3-d4b9-49e4-ab3a-acc28e1dc746", 00:13:06.191 "strip_size_kb": 0, 00:13:06.191 
"state": "online", 00:13:06.191 "raid_level": "raid1", 00:13:06.191 "superblock": true, 00:13:06.191 "num_base_bdevs": 4, 00:13:06.191 "num_base_bdevs_discovered": 4, 00:13:06.191 "num_base_bdevs_operational": 4, 00:13:06.191 "base_bdevs_list": [ 00:13:06.191 { 00:13:06.191 "name": "NewBaseBdev", 00:13:06.191 "uuid": "2b72bb10-e830-4d28-90bd-5e87cf79dfec", 00:13:06.191 "is_configured": true, 00:13:06.191 "data_offset": 2048, 00:13:06.191 "data_size": 63488 00:13:06.191 }, 00:13:06.191 { 00:13:06.191 "name": "BaseBdev2", 00:13:06.191 "uuid": "6c1567e1-89f2-474e-8c3c-193f13c8fba2", 00:13:06.191 "is_configured": true, 00:13:06.191 "data_offset": 2048, 00:13:06.191 "data_size": 63488 00:13:06.191 }, 00:13:06.191 { 00:13:06.191 "name": "BaseBdev3", 00:13:06.191 "uuid": "bddeb86e-d794-48a9-93e9-a2e5977091e3", 00:13:06.191 "is_configured": true, 00:13:06.191 "data_offset": 2048, 00:13:06.191 "data_size": 63488 00:13:06.191 }, 00:13:06.191 { 00:13:06.191 "name": "BaseBdev4", 00:13:06.191 "uuid": "f42c3fc9-9778-4826-b62c-68d30ad85345", 00:13:06.191 "is_configured": true, 00:13:06.191 "data_offset": 2048, 00:13:06.191 "data_size": 63488 00:13:06.191 } 00:13:06.191 ] 00:13:06.191 }' 00:13:06.191 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.191 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.757 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:06.757 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:06.757 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:06.757 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:06.757 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:06.757 
11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:06.757 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:06.757 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:06.757 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.757 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.757 [2024-11-15 11:24:49.493362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.757 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.757 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:06.757 "name": "Existed_Raid", 00:13:06.757 "aliases": [ 00:13:06.757 "1465b4c3-d4b9-49e4-ab3a-acc28e1dc746" 00:13:06.757 ], 00:13:06.757 "product_name": "Raid Volume", 00:13:06.757 "block_size": 512, 00:13:06.757 "num_blocks": 63488, 00:13:06.757 "uuid": "1465b4c3-d4b9-49e4-ab3a-acc28e1dc746", 00:13:06.757 "assigned_rate_limits": { 00:13:06.757 "rw_ios_per_sec": 0, 00:13:06.757 "rw_mbytes_per_sec": 0, 00:13:06.757 "r_mbytes_per_sec": 0, 00:13:06.757 "w_mbytes_per_sec": 0 00:13:06.757 }, 00:13:06.757 "claimed": false, 00:13:06.757 "zoned": false, 00:13:06.757 "supported_io_types": { 00:13:06.757 "read": true, 00:13:06.757 "write": true, 00:13:06.757 "unmap": false, 00:13:06.757 "flush": false, 00:13:06.757 "reset": true, 00:13:06.757 "nvme_admin": false, 00:13:06.757 "nvme_io": false, 00:13:06.757 "nvme_io_md": false, 00:13:06.757 "write_zeroes": true, 00:13:06.757 "zcopy": false, 00:13:06.757 "get_zone_info": false, 00:13:06.757 "zone_management": false, 00:13:06.757 "zone_append": false, 00:13:06.757 "compare": false, 00:13:06.757 "compare_and_write": false, 00:13:06.757 
"abort": false, 00:13:06.757 "seek_hole": false, 00:13:06.757 "seek_data": false, 00:13:06.757 "copy": false, 00:13:06.757 "nvme_iov_md": false 00:13:06.757 }, 00:13:06.757 "memory_domains": [ 00:13:06.757 { 00:13:06.757 "dma_device_id": "system", 00:13:06.757 "dma_device_type": 1 00:13:06.757 }, 00:13:06.757 { 00:13:06.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.757 "dma_device_type": 2 00:13:06.757 }, 00:13:06.757 { 00:13:06.757 "dma_device_id": "system", 00:13:06.757 "dma_device_type": 1 00:13:06.757 }, 00:13:06.757 { 00:13:06.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.757 "dma_device_type": 2 00:13:06.757 }, 00:13:06.757 { 00:13:06.757 "dma_device_id": "system", 00:13:06.757 "dma_device_type": 1 00:13:06.757 }, 00:13:06.757 { 00:13:06.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.757 "dma_device_type": 2 00:13:06.757 }, 00:13:06.757 { 00:13:06.757 "dma_device_id": "system", 00:13:06.757 "dma_device_type": 1 00:13:06.757 }, 00:13:06.757 { 00:13:06.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.757 "dma_device_type": 2 00:13:06.757 } 00:13:06.757 ], 00:13:06.757 "driver_specific": { 00:13:06.757 "raid": { 00:13:06.757 "uuid": "1465b4c3-d4b9-49e4-ab3a-acc28e1dc746", 00:13:06.757 "strip_size_kb": 0, 00:13:06.757 "state": "online", 00:13:06.758 "raid_level": "raid1", 00:13:06.758 "superblock": true, 00:13:06.758 "num_base_bdevs": 4, 00:13:06.758 "num_base_bdevs_discovered": 4, 00:13:06.758 "num_base_bdevs_operational": 4, 00:13:06.758 "base_bdevs_list": [ 00:13:06.758 { 00:13:06.758 "name": "NewBaseBdev", 00:13:06.758 "uuid": "2b72bb10-e830-4d28-90bd-5e87cf79dfec", 00:13:06.758 "is_configured": true, 00:13:06.758 "data_offset": 2048, 00:13:06.758 "data_size": 63488 00:13:06.758 }, 00:13:06.758 { 00:13:06.758 "name": "BaseBdev2", 00:13:06.758 "uuid": "6c1567e1-89f2-474e-8c3c-193f13c8fba2", 00:13:06.758 "is_configured": true, 00:13:06.758 "data_offset": 2048, 00:13:06.758 "data_size": 63488 00:13:06.758 }, 00:13:06.758 { 
00:13:06.758 "name": "BaseBdev3", 00:13:06.758 "uuid": "bddeb86e-d794-48a9-93e9-a2e5977091e3", 00:13:06.758 "is_configured": true, 00:13:06.758 "data_offset": 2048, 00:13:06.758 "data_size": 63488 00:13:06.758 }, 00:13:06.758 { 00:13:06.758 "name": "BaseBdev4", 00:13:06.758 "uuid": "f42c3fc9-9778-4826-b62c-68d30ad85345", 00:13:06.758 "is_configured": true, 00:13:06.758 "data_offset": 2048, 00:13:06.758 "data_size": 63488 00:13:06.758 } 00:13:06.758 ] 00:13:06.758 } 00:13:06.758 } 00:13:06.758 }' 00:13:06.758 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:06.758 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:06.758 BaseBdev2 00:13:06.758 BaseBdev3 00:13:06.758 BaseBdev4' 00:13:06.758 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.758 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:06.758 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:06.758 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:06.758 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.758 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.758 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.758 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.017 [2024-11-15 11:24:49.888975] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:07.017 [2024-11-15 11:24:49.889032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.017 [2024-11-15 11:24:49.889132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.017 [2024-11-15 11:24:49.889624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.017 [2024-11-15 11:24:49.889704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73876 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 73876 ']' 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 73876 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73876 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:07.017 killing process with pid 73876 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73876' 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 73876 00:13:07.017 [2024-11-15 11:24:49.926802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.017 11:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 73876 00:13:07.584 [2024-11-15 11:24:50.275118] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.521 11:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:08.521 00:13:08.521 real 0m12.809s 00:13:08.521 user 0m21.246s 00:13:08.521 sys 0m1.832s 00:13:08.521 11:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:13:08.521 11:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.521 ************************************ 00:13:08.521 END TEST raid_state_function_test_sb 00:13:08.521 ************************************ 00:13:08.521 11:24:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:08.521 11:24:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:08.521 11:24:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:08.521 11:24:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.521 ************************************ 00:13:08.521 START TEST raid_superblock_test 00:13:08.521 ************************************ 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:08.521 11:24:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74562 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74562 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74562 ']' 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:08.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:08.521 11:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.780 [2024-11-15 11:24:51.526375] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:13:08.780 [2024-11-15 11:24:51.526576] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74562 ] 00:13:08.780 [2024-11-15 11:24:51.711530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.038 [2024-11-15 11:24:51.871179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.298 [2024-11-15 11:24:52.122099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.298 [2024-11-15 11:24:52.122148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.557 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:09.557 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:09.557 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:09.557 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:09.557 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:09.557 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:09.557 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:09.557 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:09.557 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:09.557 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:09.557 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:09.557 
11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.557 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.816 malloc1 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.816 [2024-11-15 11:24:52.547173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:09.816 [2024-11-15 11:24:52.547310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.816 [2024-11-15 11:24:52.547347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:09.816 [2024-11-15 11:24:52.547371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.816 [2024-11-15 11:24:52.550331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.816 [2024-11-15 11:24:52.550406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:09.816 pt1 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.816 malloc2 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.816 [2024-11-15 11:24:52.605054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:09.816 [2024-11-15 11:24:52.605150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.816 [2024-11-15 11:24:52.605218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:09.816 [2024-11-15 11:24:52.605236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.816 [2024-11-15 11:24:52.608144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.816 [2024-11-15 11:24:52.608233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:09.816 
pt2 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.816 malloc3 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.816 [2024-11-15 11:24:52.678253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:09.816 [2024-11-15 11:24:52.678333] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.816 [2024-11-15 11:24:52.678371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:09.816 [2024-11-15 11:24:52.678387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.816 [2024-11-15 11:24:52.681258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.816 [2024-11-15 11:24:52.681318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:09.816 pt3 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:09.816 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.817 malloc4 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.817 [2024-11-15 11:24:52.736835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:09.817 [2024-11-15 11:24:52.736937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.817 [2024-11-15 11:24:52.736971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:09.817 [2024-11-15 11:24:52.736986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.817 [2024-11-15 11:24:52.739926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.817 [2024-11-15 11:24:52.739984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:09.817 pt4 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.817 [2024-11-15 11:24:52.748881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:09.817 [2024-11-15 11:24:52.751482] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:09.817 [2024-11-15 11:24:52.751610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:09.817 [2024-11-15 11:24:52.751751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:09.817 [2024-11-15 11:24:52.752016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:09.817 [2024-11-15 11:24:52.752050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:09.817 [2024-11-15 11:24:52.752414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:09.817 [2024-11-15 11:24:52.752684] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:09.817 [2024-11-15 11:24:52.752723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:09.817 [2024-11-15 11:24:52.752960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.817 
11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:09.817 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.075 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.075 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:10.075 "name": "raid_bdev1",
00:13:10.075 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc",
00:13:10.075 "strip_size_kb": 0,
00:13:10.075 "state": "online",
00:13:10.075 "raid_level": "raid1",
00:13:10.075 "superblock": true,
00:13:10.075 "num_base_bdevs": 4,
00:13:10.075 "num_base_bdevs_discovered": 4,
00:13:10.075 "num_base_bdevs_operational": 4,
00:13:10.075 "base_bdevs_list": [
00:13:10.075 {
00:13:10.075 "name": "pt1",
00:13:10.075 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:10.075 "is_configured": true,
00:13:10.075 "data_offset": 2048,
00:13:10.075 "data_size": 63488
00:13:10.075 },
00:13:10.075 {
00:13:10.075 "name": "pt2",
00:13:10.075 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:10.075 "is_configured": true,
00:13:10.075 "data_offset": 2048,
00:13:10.075 "data_size": 63488
00:13:10.075 },
00:13:10.075 {
00:13:10.075 "name": "pt3",
00:13:10.075 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:10.075 "is_configured": true,
00:13:10.075 "data_offset": 2048,
00:13:10.075 "data_size": 63488
00:13:10.075 },
00:13:10.075 {
00:13:10.075 "name": "pt4",
00:13:10.075 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:10.075 "is_configured": true,
00:13:10.075 "data_offset": 2048,
00:13:10.075 "data_size": 63488
00:13:10.075 }
00:13:10.075 ]
00:13:10.075 }'
00:13:10.075 11:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:10.075 11:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.334 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:13:10.334 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:10.334 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:10.334 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:10.334 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:10.334 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:10.334 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:10.334 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:10.334 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.334 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.334 [2024-11-15 11:24:53.281587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:10.591 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.591 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:10.591 "name": "raid_bdev1",
00:13:10.591 "aliases": [
00:13:10.591 "7ae3543b-925c-4c19-9b7c-304b208717fc"
00:13:10.591 ],
00:13:10.591 "product_name": "Raid Volume",
00:13:10.591 "block_size": 512,
00:13:10.591 "num_blocks": 63488,
00:13:10.591 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc",
00:13:10.591 "assigned_rate_limits": {
00:13:10.591 "rw_ios_per_sec": 0,
00:13:10.591 "rw_mbytes_per_sec": 0,
00:13:10.591 "r_mbytes_per_sec": 0,
00:13:10.591 "w_mbytes_per_sec": 0
00:13:10.591 },
00:13:10.591 "claimed": false,
00:13:10.591 "zoned": false,
00:13:10.591 "supported_io_types": {
00:13:10.591 "read": true,
00:13:10.591 "write": true,
00:13:10.591 "unmap": false,
00:13:10.591 "flush": false,
00:13:10.591 "reset": true,
00:13:10.591 "nvme_admin": false,
00:13:10.591 "nvme_io": false,
00:13:10.591 "nvme_io_md": false,
00:13:10.592 "write_zeroes": true,
00:13:10.592 "zcopy": false,
00:13:10.592 "get_zone_info": false,
00:13:10.592 "zone_management": false,
00:13:10.592 "zone_append": false,
00:13:10.592 "compare": false,
00:13:10.592 "compare_and_write": false,
00:13:10.592 "abort": false,
00:13:10.592 "seek_hole": false,
00:13:10.592 "seek_data": false,
00:13:10.592 "copy": false,
00:13:10.592 "nvme_iov_md": false
00:13:10.592 },
00:13:10.592 "memory_domains": [
00:13:10.592 {
00:13:10.592 "dma_device_id": "system",
00:13:10.592 "dma_device_type": 1
00:13:10.592 },
00:13:10.592 {
00:13:10.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:10.592 "dma_device_type": 2
00:13:10.592 },
00:13:10.592 {
00:13:10.592 "dma_device_id": "system",
00:13:10.592 "dma_device_type": 1
00:13:10.592 },
00:13:10.592 {
00:13:10.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:10.592 "dma_device_type": 2
00:13:10.592 },
00:13:10.592 {
00:13:10.592 "dma_device_id": "system",
00:13:10.592 "dma_device_type": 1
00:13:10.592 },
00:13:10.592 {
00:13:10.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:10.592 "dma_device_type": 2
00:13:10.592 },
00:13:10.592 {
00:13:10.592 "dma_device_id": "system",
00:13:10.592 "dma_device_type": 1
00:13:10.592 },
00:13:10.592 {
00:13:10.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:10.592 "dma_device_type": 2
00:13:10.592 }
00:13:10.592 ],
00:13:10.592 "driver_specific": {
00:13:10.592 "raid": {
00:13:10.592 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc",
00:13:10.592 "strip_size_kb": 0,
00:13:10.592 "state": "online",
00:13:10.592 "raid_level": "raid1",
00:13:10.592 "superblock": true,
00:13:10.592 "num_base_bdevs": 4,
00:13:10.592 "num_base_bdevs_discovered": 4,
00:13:10.592 "num_base_bdevs_operational": 4,
00:13:10.592 "base_bdevs_list": [
00:13:10.592 {
00:13:10.592 "name": "pt1",
00:13:10.592 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:10.592 "is_configured": true,
00:13:10.592 "data_offset": 2048,
00:13:10.592 "data_size": 63488
00:13:10.592 },
00:13:10.592 {
00:13:10.592 "name": "pt2",
00:13:10.592 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:10.592 "is_configured": true,
00:13:10.592 "data_offset": 2048,
00:13:10.592 "data_size": 63488
00:13:10.592 },
00:13:10.592 {
00:13:10.592 "name": "pt3",
00:13:10.592 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:10.592 "is_configured": true,
00:13:10.592 "data_offset": 2048,
00:13:10.592 "data_size": 63488
00:13:10.592 },
00:13:10.592 {
00:13:10.592 "name": "pt4",
00:13:10.592 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:10.592 "is_configured": true,
00:13:10.592 "data_offset": 2048,
00:13:10.592 "data_size": 63488
00:13:10.592 }
00:13:10.592 ]
00:13:10.592 }
00:13:10.592 }
00:13:10.592 }'
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:10.592 pt2
00:13:10.592 pt3
00:13:10.592 pt4'
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.592 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.851 [2024-11-15 11:24:53.657543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7ae3543b-925c-4c19-9b7c-304b208717fc
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7ae3543b-925c-4c19-9b7c-304b208717fc ']'
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.851 [2024-11-15 11:24:53.701215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:10.851 [2024-11-15 11:24:53.701260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:10.851 [2024-11-15 11:24:53.701374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:10.851 [2024-11-15 11:24:53.701491] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:10.851 [2024-11-15 11:24:53.701516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:10.851 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.110 [2024-11-15 11:24:53.861297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:13:11.110 [2024-11-15 11:24:53.864043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:13:11.110 [2024-11-15 11:24:53.864135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:13:11.110 [2024-11-15 11:24:53.864238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:13:11.110 [2024-11-15 11:24:53.864318] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:13:11.110 [2024-11-15 11:24:53.864395] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:13:11.110 [2024-11-15 11:24:53.864429] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:13:11.110 [2024-11-15 11:24:53.864461] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:13:11.110 [2024-11-15 11:24:53.864484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:11.110 [2024-11-15 11:24:53.864501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:13:11.110 request:
00:13:11.110 {
00:13:11.110 "name": "raid_bdev1",
00:13:11.110 "raid_level": "raid1",
00:13:11.110 "base_bdevs": [
00:13:11.110 "malloc1",
00:13:11.110 "malloc2",
00:13:11.110 "malloc3",
00:13:11.110 "malloc4"
00:13:11.110 ],
00:13:11.110 "superblock": false,
00:13:11.110 "method": "bdev_raid_create",
00:13:11.110 "req_id": 1
00:13:11.110 }
00:13:11.110 Got JSON-RPC error response
00:13:11.110 response:
00:13:11.110 {
00:13:11.110 "code": -17,
00:13:11.110 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:13:11.110 }
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:13:11.110 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.111 [2024-11-15 11:24:53.925361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:11.111 [2024-11-15 11:24:53.925426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:11.111 [2024-11-15 11:24:53.925454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:13:11.111 [2024-11-15 11:24:53.925473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:11.111 [2024-11-15 11:24:53.928635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:11.111 [2024-11-15 11:24:53.928718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:11.111 [2024-11-15 11:24:53.928809] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:13:11.111 [2024-11-15 11:24:53.928881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:11.111 pt1
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:11.111 "name": "raid_bdev1",
00:13:11.111 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc",
00:13:11.111 "strip_size_kb": 0,
00:13:11.111 "state": "configuring",
00:13:11.111 "raid_level": "raid1",
00:13:11.111 "superblock": true,
00:13:11.111 "num_base_bdevs": 4,
00:13:11.111 "num_base_bdevs_discovered": 1,
00:13:11.111 "num_base_bdevs_operational": 4,
00:13:11.111 "base_bdevs_list": [
00:13:11.111 {
00:13:11.111 "name": "pt1",
00:13:11.111 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:11.111 "is_configured": true,
00:13:11.111 "data_offset": 2048,
00:13:11.111 "data_size": 63488
00:13:11.111 },
00:13:11.111 {
00:13:11.111 "name": null,
00:13:11.111 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:11.111 "is_configured": false,
00:13:11.111 "data_offset": 2048,
00:13:11.111 "data_size": 63488
00:13:11.111 },
00:13:11.111 {
00:13:11.111 "name": null,
00:13:11.111 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:11.111 "is_configured": false,
00:13:11.111 "data_offset": 2048,
00:13:11.111 "data_size": 63488
00:13:11.111 },
00:13:11.111 {
00:13:11.111 "name": null,
00:13:11.111 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:11.111 "is_configured": false,
00:13:11.111 "data_offset": 2048,
00:13:11.111 "data_size": 63488
00:13:11.111 }
00:13:11.111 ]
00:13:11.111 }'
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:11.111 11:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.678 [2024-11-15 11:24:54.457661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:11.678 [2024-11-15 11:24:54.457787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:11.678 [2024-11-15 11:24:54.457821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:13:11.678 [2024-11-15 11:24:54.457841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:11.678 [2024-11-15 11:24:54.458523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:11.678 [2024-11-15 11:24:54.458588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:11.678 [2024-11-15 11:24:54.458714] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:11.678 [2024-11-15 11:24:54.458769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:11.678 pt2
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.678 [2024-11-15 11:24:54.465615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.678 11:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.679 11:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.679 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:11.679 11:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.679 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:11.679 "name": "raid_bdev1",
00:13:11.679 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc",
00:13:11.679 "strip_size_kb": 0,
00:13:11.679 "state": "configuring",
00:13:11.679 "raid_level": "raid1",
00:13:11.679 "superblock": true,
00:13:11.679 "num_base_bdevs": 4,
00:13:11.679 "num_base_bdevs_discovered": 1,
00:13:11.679 "num_base_bdevs_operational": 4,
00:13:11.679 "base_bdevs_list": [
00:13:11.679 {
00:13:11.679 "name": "pt1",
00:13:11.679 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:11.679 "is_configured": true,
00:13:11.679 "data_offset": 2048,
00:13:11.679 "data_size": 63488
00:13:11.679 },
00:13:11.679 {
00:13:11.679 "name": null,
00:13:11.679 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:11.679 "is_configured": false,
00:13:11.679 "data_offset": 0,
00:13:11.679 "data_size": 63488
00:13:11.679 },
00:13:11.679 {
00:13:11.679 "name": null,
00:13:11.679 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:11.679 "is_configured": false,
00:13:11.679 "data_offset": 2048,
00:13:11.679 "data_size": 63488
00:13:11.679 },
00:13:11.679 {
00:13:11.679 "name": null,
00:13:11.679 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:11.679 "is_configured": false,
00:13:11.679 "data_offset": 2048,
00:13:11.679 "data_size": 63488
00:13:11.679 }
00:13:11.679 ]
00:13:11.679 }'
00:13:11.679 11:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:11.679 11:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:12.245 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:12.246 [2024-11-15 11:24:55.025766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:12.246 [2024-11-15 11:24:55.025861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:12.246 [2024-11-15 11:24:55.025902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:13:12.246 [2024-11-15 11:24:55.025925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:12.246 [2024-11-15 11:24:55.026600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:12.246 [2024-11-15 11:24:55.026640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:12.246 [2024-11-15 11:24:55.026764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:12.246 [2024-11-15 11:24:55.026809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:12.246 pt2
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:12.246 [2024-11-15 11:24:55.033707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:12.246 [2024-11-15 11:24:55.033780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:12.246 [2024-11-15 11:24:55.033809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:13:12.246 [2024-11-15 11:24:55.033828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:12.246 [2024-11-15 11:24:55.034345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:12.246 [2024-11-15 11:24:55.034388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:12.246 [2024-11-15 11:24:55.034474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:13:12.246 [2024-11-15 11:24:55.034503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:12.246 pt3
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:12.246 [2024-11-15 11:24:55.041666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:13:12.246 [2024-11-15 11:24:55.041717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:12.246 [2024-11-15 11:24:55.041745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:13:12.246 [2024-11-15 11:24:55.041761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:12.246 [2024-11-15 11:24:55.042267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:12.246 [2024-11-15 11:24:55.042311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:13:12.246 [2024-11-15 11:24:55.042393] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:13:12.246 [2024-11-15 11:24:55.042432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:13:12.246 [2024-11-15 11:24:55.042618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:13:12.246 [2024-11-15 11:24:55.042645] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:12.246 [2024-11-15 11:24:55.042984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:13:12.246 [2024-11-15 11:24:55.043216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:13:12.246 [2024-11-15 11:24:55.043248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:13:12.246 [2024-11-15 11:24:55.043425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:12.246 pt4
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:12.246 "name": "raid_bdev1",
00:13:12.246 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc",
00:13:12.246 "strip_size_kb": 0,
00:13:12.246 "state": "online",
00:13:12.246 "raid_level": "raid1",
00:13:12.246 "superblock": true,
00:13:12.246 "num_base_bdevs": 4,
00:13:12.246 "num_base_bdevs_discovered": 4,
00:13:12.246 "num_base_bdevs_operational": 4,
00:13:12.246 "base_bdevs_list": [
00:13:12.246 {
00:13:12.246 "name": "pt1",
00:13:12.246 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:12.246 "is_configured": true,
00:13:12.246 "data_offset": 2048,
00:13:12.246 "data_size": 63488
00:13:12.246 },
00:13:12.246 {
00:13:12.246 "name": "pt2",
00:13:12.246 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:12.246 "is_configured": true,
00:13:12.246 "data_offset": 2048,
00:13:12.246 "data_size": 63488
00:13:12.246 },
00:13:12.246 {
00:13:12.246 "name": "pt3",
00:13:12.246 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:12.246 "is_configured": true,
00:13:12.246 "data_offset": 2048,
00:13:12.246 "data_size": 63488
00:13:12.246 },
00:13:12.246 {
00:13:12.246 "name": "pt4",
00:13:12.246 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:12.246 "is_configured": true,
00:13:12.246 "data_offset": 2048,
00:13:12.246 "data_size": 63488
00:13:12.246 }
00:13:12.246 ]
00:13:12.246 }'
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:12.246 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:12.812 11:24:55
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.812 [2024-11-15 11:24:55.566349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:12.812 "name": "raid_bdev1", 00:13:12.812 "aliases": [ 00:13:12.812 "7ae3543b-925c-4c19-9b7c-304b208717fc" 00:13:12.812 ], 00:13:12.812 "product_name": "Raid Volume", 00:13:12.812 "block_size": 512, 00:13:12.812 "num_blocks": 63488, 00:13:12.812 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc", 00:13:12.812 "assigned_rate_limits": { 00:13:12.812 "rw_ios_per_sec": 0, 00:13:12.812 "rw_mbytes_per_sec": 0, 00:13:12.812 "r_mbytes_per_sec": 0, 00:13:12.812 "w_mbytes_per_sec": 0 00:13:12.812 }, 00:13:12.812 "claimed": false, 00:13:12.812 "zoned": false, 00:13:12.812 "supported_io_types": { 00:13:12.812 "read": true, 00:13:12.812 "write": true, 00:13:12.812 "unmap": false, 00:13:12.812 "flush": false, 00:13:12.812 "reset": true, 00:13:12.812 "nvme_admin": false, 00:13:12.812 "nvme_io": false, 00:13:12.812 "nvme_io_md": false, 00:13:12.812 "write_zeroes": true, 00:13:12.812 "zcopy": false, 00:13:12.812 "get_zone_info": false, 00:13:12.812 "zone_management": false, 00:13:12.812 "zone_append": false, 00:13:12.812 "compare": false, 00:13:12.812 "compare_and_write": false, 00:13:12.812 "abort": false, 00:13:12.812 "seek_hole": false, 00:13:12.812 "seek_data": false, 00:13:12.812 "copy": false, 00:13:12.812 "nvme_iov_md": false 00:13:12.812 }, 00:13:12.812 "memory_domains": [ 00:13:12.812 { 00:13:12.812 "dma_device_id": "system", 00:13:12.812 
"dma_device_type": 1 00:13:12.812 }, 00:13:12.812 { 00:13:12.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.812 "dma_device_type": 2 00:13:12.812 }, 00:13:12.812 { 00:13:12.812 "dma_device_id": "system", 00:13:12.812 "dma_device_type": 1 00:13:12.812 }, 00:13:12.812 { 00:13:12.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.812 "dma_device_type": 2 00:13:12.812 }, 00:13:12.812 { 00:13:12.812 "dma_device_id": "system", 00:13:12.812 "dma_device_type": 1 00:13:12.812 }, 00:13:12.812 { 00:13:12.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.812 "dma_device_type": 2 00:13:12.812 }, 00:13:12.812 { 00:13:12.812 "dma_device_id": "system", 00:13:12.812 "dma_device_type": 1 00:13:12.812 }, 00:13:12.812 { 00:13:12.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.812 "dma_device_type": 2 00:13:12.812 } 00:13:12.812 ], 00:13:12.812 "driver_specific": { 00:13:12.812 "raid": { 00:13:12.812 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc", 00:13:12.812 "strip_size_kb": 0, 00:13:12.812 "state": "online", 00:13:12.812 "raid_level": "raid1", 00:13:12.812 "superblock": true, 00:13:12.812 "num_base_bdevs": 4, 00:13:12.812 "num_base_bdevs_discovered": 4, 00:13:12.812 "num_base_bdevs_operational": 4, 00:13:12.812 "base_bdevs_list": [ 00:13:12.812 { 00:13:12.812 "name": "pt1", 00:13:12.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.812 "is_configured": true, 00:13:12.812 "data_offset": 2048, 00:13:12.812 "data_size": 63488 00:13:12.812 }, 00:13:12.812 { 00:13:12.812 "name": "pt2", 00:13:12.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.812 "is_configured": true, 00:13:12.812 "data_offset": 2048, 00:13:12.812 "data_size": 63488 00:13:12.812 }, 00:13:12.812 { 00:13:12.812 "name": "pt3", 00:13:12.812 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.812 "is_configured": true, 00:13:12.812 "data_offset": 2048, 00:13:12.812 "data_size": 63488 00:13:12.812 }, 00:13:12.812 { 00:13:12.812 "name": "pt4", 00:13:12.812 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:12.812 "is_configured": true, 00:13:12.812 "data_offset": 2048, 00:13:12.812 "data_size": 63488 00:13:12.812 } 00:13:12.812 ] 00:13:12.812 } 00:13:12.812 } 00:13:12.812 }' 00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:12.812 pt2 00:13:12.812 pt3 00:13:12.812 pt4' 00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.812 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:12.813 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.813 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.813 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.813 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.813 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.813 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.813 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.813 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:12.813 11:24:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.813 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.813 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.071 11:24:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:13.071 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.072 [2024-11-15 11:24:55.918371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7ae3543b-925c-4c19-9b7c-304b208717fc '!=' 7ae3543b-925c-4c19-9b7c-304b208717fc ']' 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.072 [2024-11-15 11:24:55.957999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:13.072 
11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.072 11:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.072 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.072 "name": "raid_bdev1", 00:13:13.072 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc", 00:13:13.072 "strip_size_kb": 0, 00:13:13.072 "state": 
"online", 00:13:13.072 "raid_level": "raid1", 00:13:13.072 "superblock": true, 00:13:13.072 "num_base_bdevs": 4, 00:13:13.072 "num_base_bdevs_discovered": 3, 00:13:13.072 "num_base_bdevs_operational": 3, 00:13:13.072 "base_bdevs_list": [ 00:13:13.072 { 00:13:13.072 "name": null, 00:13:13.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.072 "is_configured": false, 00:13:13.072 "data_offset": 0, 00:13:13.072 "data_size": 63488 00:13:13.072 }, 00:13:13.072 { 00:13:13.072 "name": "pt2", 00:13:13.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.072 "is_configured": true, 00:13:13.072 "data_offset": 2048, 00:13:13.072 "data_size": 63488 00:13:13.072 }, 00:13:13.072 { 00:13:13.072 "name": "pt3", 00:13:13.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.072 "is_configured": true, 00:13:13.072 "data_offset": 2048, 00:13:13.072 "data_size": 63488 00:13:13.072 }, 00:13:13.072 { 00:13:13.072 "name": "pt4", 00:13:13.072 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.072 "is_configured": true, 00:13:13.072 "data_offset": 2048, 00:13:13.072 "data_size": 63488 00:13:13.072 } 00:13:13.072 ] 00:13:13.072 }' 00:13:13.072 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.072 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.639 [2024-11-15 11:24:56.498147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.639 [2024-11-15 11:24:56.498203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.639 [2024-11-15 11:24:56.498322] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.639 [2024-11-15 11:24:56.498434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.639 [2024-11-15 11:24:56.498451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:13.639 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:13.897 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:13.897 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:13.897 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.897 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.897 [2024-11-15 11:24:56.594135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:13.897 [2024-11-15 
11:24:56.594226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.897 [2024-11-15 11:24:56.594260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:13.897 [2024-11-15 11:24:56.594276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.897 [2024-11-15 11:24:56.597469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.897 [2024-11-15 11:24:56.597516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:13.897 [2024-11-15 11:24:56.597628] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:13.897 [2024-11-15 11:24:56.597691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.898 pt2 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.898 11:24:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.898 "name": "raid_bdev1", 00:13:13.898 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc", 00:13:13.898 "strip_size_kb": 0, 00:13:13.898 "state": "configuring", 00:13:13.898 "raid_level": "raid1", 00:13:13.898 "superblock": true, 00:13:13.898 "num_base_bdevs": 4, 00:13:13.898 "num_base_bdevs_discovered": 1, 00:13:13.898 "num_base_bdevs_operational": 3, 00:13:13.898 "base_bdevs_list": [ 00:13:13.898 { 00:13:13.898 "name": null, 00:13:13.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.898 "is_configured": false, 00:13:13.898 "data_offset": 2048, 00:13:13.898 "data_size": 63488 00:13:13.898 }, 00:13:13.898 { 00:13:13.898 "name": "pt2", 00:13:13.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.898 "is_configured": true, 00:13:13.898 "data_offset": 2048, 00:13:13.898 "data_size": 63488 00:13:13.898 }, 00:13:13.898 { 00:13:13.898 "name": null, 00:13:13.898 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.898 "is_configured": false, 00:13:13.898 "data_offset": 2048, 00:13:13.898 "data_size": 63488 00:13:13.898 }, 00:13:13.898 { 00:13:13.898 "name": null, 00:13:13.898 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.898 "is_configured": false, 00:13:13.898 "data_offset": 2048, 00:13:13.898 "data_size": 63488 00:13:13.898 
} 00:13:13.898 ] 00:13:13.898 }' 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.898 11:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.465 [2024-11-15 11:24:57.122354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:14.465 [2024-11-15 11:24:57.122494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.465 [2024-11-15 11:24:57.122547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:14.465 [2024-11-15 11:24:57.122563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.465 [2024-11-15 11:24:57.123313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.465 [2024-11-15 11:24:57.123356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:14.465 [2024-11-15 11:24:57.123491] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:14.465 [2024-11-15 11:24:57.123526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:14.465 pt3 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.465 "name": "raid_bdev1", 00:13:14.465 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc", 00:13:14.465 "strip_size_kb": 0, 00:13:14.465 "state": "configuring", 00:13:14.465 "raid_level": "raid1", 00:13:14.465 "superblock": true, 00:13:14.465 "num_base_bdevs": 4, 00:13:14.465 "num_base_bdevs_discovered": 2, 
00:13:14.465 "num_base_bdevs_operational": 3, 00:13:14.465 "base_bdevs_list": [ 00:13:14.465 { 00:13:14.465 "name": null, 00:13:14.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.465 "is_configured": false, 00:13:14.465 "data_offset": 2048, 00:13:14.465 "data_size": 63488 00:13:14.465 }, 00:13:14.465 { 00:13:14.465 "name": "pt2", 00:13:14.465 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.465 "is_configured": true, 00:13:14.465 "data_offset": 2048, 00:13:14.465 "data_size": 63488 00:13:14.465 }, 00:13:14.465 { 00:13:14.465 "name": "pt3", 00:13:14.465 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.465 "is_configured": true, 00:13:14.465 "data_offset": 2048, 00:13:14.465 "data_size": 63488 00:13:14.465 }, 00:13:14.465 { 00:13:14.465 "name": null, 00:13:14.465 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:14.465 "is_configured": false, 00:13:14.465 "data_offset": 2048, 00:13:14.465 "data_size": 63488 00:13:14.465 } 00:13:14.465 ] 00:13:14.465 }' 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.465 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.724 [2024-11-15 11:24:57.654552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:14.724 [2024-11-15 
11:24:57.654679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.724 [2024-11-15 11:24:57.654724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:14.724 [2024-11-15 11:24:57.654740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.724 [2024-11-15 11:24:57.655394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.724 [2024-11-15 11:24:57.655432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:14.724 [2024-11-15 11:24:57.655560] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:14.724 [2024-11-15 11:24:57.655595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:14.724 [2024-11-15 11:24:57.655778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:14.724 [2024-11-15 11:24:57.655805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:14.724 [2024-11-15 11:24:57.656133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:14.724 [2024-11-15 11:24:57.656373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:14.724 [2024-11-15 11:24:57.656407] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:14.724 [2024-11-15 11:24:57.656599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.724 pt4 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.724 11:24:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.724 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.982 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.982 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.982 "name": "raid_bdev1", 00:13:14.982 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc", 00:13:14.982 "strip_size_kb": 0, 00:13:14.982 "state": "online", 00:13:14.982 "raid_level": "raid1", 00:13:14.982 "superblock": true, 00:13:14.982 "num_base_bdevs": 4, 00:13:14.982 "num_base_bdevs_discovered": 3, 00:13:14.982 "num_base_bdevs_operational": 3, 00:13:14.982 "base_bdevs_list": [ 00:13:14.982 { 00:13:14.982 "name": null, 00:13:14.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.982 
"is_configured": false, 00:13:14.982 "data_offset": 2048, 00:13:14.982 "data_size": 63488 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "name": "pt2", 00:13:14.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.982 "is_configured": true, 00:13:14.982 "data_offset": 2048, 00:13:14.982 "data_size": 63488 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "name": "pt3", 00:13:14.982 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.982 "is_configured": true, 00:13:14.982 "data_offset": 2048, 00:13:14.982 "data_size": 63488 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "name": "pt4", 00:13:14.982 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:14.982 "is_configured": true, 00:13:14.982 "data_offset": 2048, 00:13:14.982 "data_size": 63488 00:13:14.982 } 00:13:14.982 ] 00:13:14.982 }' 00:13:14.982 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.982 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.241 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:15.241 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.241 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.241 [2024-11-15 11:24:58.182668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.241 [2024-11-15 11:24:58.182707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.241 [2024-11-15 11:24:58.182830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.241 [2024-11-15 11:24:58.182928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.241 [2024-11-15 11:24:58.182981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:13:15.241 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.503 [2024-11-15 11:24:58.254682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:15.503 [2024-11-15 11:24:58.254785] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:15.503 [2024-11-15 11:24:58.254814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:15.503 [2024-11-15 11:24:58.254832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.503 [2024-11-15 11:24:58.258080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.503 [2024-11-15 11:24:58.258147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:15.503 [2024-11-15 11:24:58.258277] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:15.503 [2024-11-15 11:24:58.258344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:15.503 [2024-11-15 11:24:58.258518] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:15.503 [2024-11-15 11:24:58.258553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.503 [2024-11-15 11:24:58.258576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:15.503 [2024-11-15 11:24:58.258652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:15.503 [2024-11-15 11:24:58.258799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:15.503 pt1 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.503 "name": "raid_bdev1", 00:13:15.503 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc", 00:13:15.503 "strip_size_kb": 0, 00:13:15.503 "state": "configuring", 00:13:15.503 "raid_level": "raid1", 00:13:15.503 "superblock": true, 00:13:15.503 "num_base_bdevs": 4, 00:13:15.503 "num_base_bdevs_discovered": 2, 00:13:15.503 "num_base_bdevs_operational": 3, 00:13:15.503 "base_bdevs_list": [ 00:13:15.503 { 00:13:15.503 "name": null, 00:13:15.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.503 "is_configured": false, 00:13:15.503 
"data_offset": 2048, 00:13:15.503 "data_size": 63488 00:13:15.503 }, 00:13:15.503 { 00:13:15.503 "name": "pt2", 00:13:15.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:15.503 "is_configured": true, 00:13:15.503 "data_offset": 2048, 00:13:15.503 "data_size": 63488 00:13:15.503 }, 00:13:15.503 { 00:13:15.503 "name": "pt3", 00:13:15.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:15.503 "is_configured": true, 00:13:15.503 "data_offset": 2048, 00:13:15.503 "data_size": 63488 00:13:15.503 }, 00:13:15.503 { 00:13:15.503 "name": null, 00:13:15.503 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:15.503 "is_configured": false, 00:13:15.503 "data_offset": 2048, 00:13:15.503 "data_size": 63488 00:13:15.503 } 00:13:15.503 ] 00:13:15.503 }' 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.503 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:16.080 [2024-11-15 11:24:58.843106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:16.080 [2024-11-15 11:24:58.843227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.080 [2024-11-15 11:24:58.843271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:16.080 [2024-11-15 11:24:58.843288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.080 [2024-11-15 11:24:58.843921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.080 [2024-11-15 11:24:58.843958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:16.080 [2024-11-15 11:24:58.844081] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:16.080 [2024-11-15 11:24:58.844117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:16.080 [2024-11-15 11:24:58.844318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:16.080 [2024-11-15 11:24:58.844335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:16.080 [2024-11-15 11:24:58.844672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:16.080 [2024-11-15 11:24:58.844897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:16.080 [2024-11-15 11:24:58.844921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:16.080 [2024-11-15 11:24:58.845146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.080 pt4 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.080 "name": "raid_bdev1", 00:13:16.080 "uuid": "7ae3543b-925c-4c19-9b7c-304b208717fc", 00:13:16.080 "strip_size_kb": 0, 00:13:16.080 "state": "online", 00:13:16.080 "raid_level": "raid1", 00:13:16.080 "superblock": true, 00:13:16.080 "num_base_bdevs": 4, 00:13:16.080 "num_base_bdevs_discovered": 3, 00:13:16.080 "num_base_bdevs_operational": 3, 00:13:16.080 
"base_bdevs_list": [ 00:13:16.080 { 00:13:16.080 "name": null, 00:13:16.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.080 "is_configured": false, 00:13:16.080 "data_offset": 2048, 00:13:16.080 "data_size": 63488 00:13:16.080 }, 00:13:16.080 { 00:13:16.080 "name": "pt2", 00:13:16.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:16.080 "is_configured": true, 00:13:16.080 "data_offset": 2048, 00:13:16.080 "data_size": 63488 00:13:16.080 }, 00:13:16.080 { 00:13:16.080 "name": "pt3", 00:13:16.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:16.080 "is_configured": true, 00:13:16.080 "data_offset": 2048, 00:13:16.080 "data_size": 63488 00:13:16.080 }, 00:13:16.080 { 00:13:16.080 "name": "pt4", 00:13:16.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:16.080 "is_configured": true, 00:13:16.080 "data_offset": 2048, 00:13:16.080 "data_size": 63488 00:13:16.080 } 00:13:16.080 ] 00:13:16.080 }' 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.080 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.647 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:16.647 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:16.647 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.647 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.647 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.647 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:16.647 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:16.647 11:24:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.647 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.647 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:16.648 [2024-11-15 11:24:59.415696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7ae3543b-925c-4c19-9b7c-304b208717fc '!=' 7ae3543b-925c-4c19-9b7c-304b208717fc ']' 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74562 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74562 ']' 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74562 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74562 00:13:16.648 killing process with pid 74562 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74562' 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74562 00:13:16.648 [2024-11-15 11:24:59.495752] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.648 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # 
wait 74562 00:13:16.648 [2024-11-15 11:24:59.495901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.648 [2024-11-15 11:24:59.496005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.648 [2024-11-15 11:24:59.496043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:17.215 [2024-11-15 11:24:59.869727] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:18.151 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:18.151 00:13:18.151 real 0m9.586s 00:13:18.151 user 0m15.629s 00:13:18.151 sys 0m1.429s 00:13:18.151 ************************************ 00:13:18.151 END TEST raid_superblock_test 00:13:18.151 ************************************ 00:13:18.151 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:18.151 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.151 11:25:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:18.151 11:25:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:18.151 11:25:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:18.151 11:25:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:18.151 ************************************ 00:13:18.151 START TEST raid_read_error_test 00:13:18.151 ************************************ 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # 
local error_io_type=read 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 
00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TQaNqHSPuf 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75059 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75059 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75059 ']' 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:18.151 11:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.409 [2024-11-15 11:25:01.174639] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:13:18.409 [2024-11-15 11:25:01.174813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75059 ] 00:13:18.668 [2024-11-15 11:25:01.362251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.668 [2024-11-15 11:25:01.506124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.927 [2024-11-15 11:25:01.727669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.927 [2024-11-15 11:25:01.727755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.494 BaseBdev1_malloc 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.494 true 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.494 [2024-11-15 11:25:02.224798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:19.494 [2024-11-15 11:25:02.225556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.494 [2024-11-15 11:25:02.225684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:19.494 [2024-11-15 11:25:02.225784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.494 [2024-11-15 11:25:02.229008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.494 [2024-11-15 11:25:02.229306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:19.494 BaseBdev1 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.494 BaseBdev2_malloc 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.494 true 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.494 [2024-11-15 11:25:02.286074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:19.494 [2024-11-15 11:25:02.286388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.494 [2024-11-15 11:25:02.286494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:19.494 [2024-11-15 11:25:02.286587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.494 [2024-11-15 11:25:02.289721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.494 BaseBdev2 00:13:19.494 [2024-11-15 11:25:02.289901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.494 BaseBdev3_malloc 00:13:19.494 11:25:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.494 true 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.494 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.495 [2024-11-15 11:25:02.363683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:19.495 [2024-11-15 11:25:02.363890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.495 [2024-11-15 11:25:02.363931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:19.495 [2024-11-15 11:25:02.363953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.495 [2024-11-15 11:25:02.366992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.495 [2024-11-15 11:25:02.367180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:19.495 BaseBdev3 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.495 BaseBdev4_malloc 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.495 true 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.495 [2024-11-15 11:25:02.424449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:19.495 [2024-11-15 11:25:02.424525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.495 [2024-11-15 11:25:02.424556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:19.495 [2024-11-15 11:25:02.424575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.495 [2024-11-15 11:25:02.427645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.495 [2024-11-15 11:25:02.427830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:19.495 BaseBdev4 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.495 [2024-11-15 11:25:02.432704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.495 [2024-11-15 11:25:02.435546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.495 [2024-11-15 11:25:02.435661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:19.495 [2024-11-15 11:25:02.435765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:19.495 [2024-11-15 11:25:02.436096] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:19.495 [2024-11-15 11:25:02.436137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:19.495 [2024-11-15 11:25:02.436480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:19.495 [2024-11-15 11:25:02.436723] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:19.495 [2024-11-15 11:25:02.436740] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:19.495 [2024-11-15 11:25:02.436974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:19.495 11:25:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.495 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.754 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.754 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.754 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.754 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.754 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.754 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.754 "name": "raid_bdev1", 00:13:19.754 "uuid": "da7cab9c-ccae-4421-bf49-29215a492b37", 00:13:19.754 "strip_size_kb": 0, 00:13:19.754 "state": "online", 00:13:19.754 "raid_level": "raid1", 00:13:19.754 "superblock": true, 00:13:19.754 "num_base_bdevs": 4, 00:13:19.754 "num_base_bdevs_discovered": 4, 00:13:19.754 "num_base_bdevs_operational": 4, 00:13:19.754 "base_bdevs_list": [ 00:13:19.754 { 
00:13:19.754 "name": "BaseBdev1", 00:13:19.754 "uuid": "52072aa5-dc06-5ed6-8d23-4b6120260f02", 00:13:19.754 "is_configured": true, 00:13:19.754 "data_offset": 2048, 00:13:19.754 "data_size": 63488 00:13:19.754 }, 00:13:19.754 { 00:13:19.754 "name": "BaseBdev2", 00:13:19.754 "uuid": "611ca01b-f0e0-5856-b5b2-ca847fcdf6eb", 00:13:19.754 "is_configured": true, 00:13:19.754 "data_offset": 2048, 00:13:19.754 "data_size": 63488 00:13:19.754 }, 00:13:19.754 { 00:13:19.754 "name": "BaseBdev3", 00:13:19.754 "uuid": "7c4fa040-65c6-5da0-96d2-b66e76d2c188", 00:13:19.754 "is_configured": true, 00:13:19.754 "data_offset": 2048, 00:13:19.754 "data_size": 63488 00:13:19.754 }, 00:13:19.754 { 00:13:19.754 "name": "BaseBdev4", 00:13:19.754 "uuid": "0691f48a-afb2-5dd7-9304-32f20356f8b6", 00:13:19.754 "is_configured": true, 00:13:19.754 "data_offset": 2048, 00:13:19.754 "data_size": 63488 00:13:19.754 } 00:13:19.754 ] 00:13:19.754 }' 00:13:19.754 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.754 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.012 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:20.012 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:20.271 [2024-11-15 11:25:03.042752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.208 11:25:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.208 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.208 11:25:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.208 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.208 "name": "raid_bdev1", 00:13:21.208 "uuid": "da7cab9c-ccae-4421-bf49-29215a492b37", 00:13:21.208 "strip_size_kb": 0, 00:13:21.208 "state": "online", 00:13:21.208 "raid_level": "raid1", 00:13:21.208 "superblock": true, 00:13:21.208 "num_base_bdevs": 4, 00:13:21.208 "num_base_bdevs_discovered": 4, 00:13:21.208 "num_base_bdevs_operational": 4, 00:13:21.208 "base_bdevs_list": [ 00:13:21.208 { 00:13:21.208 "name": "BaseBdev1", 00:13:21.208 "uuid": "52072aa5-dc06-5ed6-8d23-4b6120260f02", 00:13:21.208 "is_configured": true, 00:13:21.208 "data_offset": 2048, 00:13:21.208 "data_size": 63488 00:13:21.208 }, 00:13:21.208 { 00:13:21.208 "name": "BaseBdev2", 00:13:21.208 "uuid": "611ca01b-f0e0-5856-b5b2-ca847fcdf6eb", 00:13:21.208 "is_configured": true, 00:13:21.208 "data_offset": 2048, 00:13:21.208 "data_size": 63488 00:13:21.208 }, 00:13:21.208 { 00:13:21.208 "name": "BaseBdev3", 00:13:21.208 "uuid": "7c4fa040-65c6-5da0-96d2-b66e76d2c188", 00:13:21.208 "is_configured": true, 00:13:21.208 "data_offset": 2048, 00:13:21.208 "data_size": 63488 00:13:21.208 }, 00:13:21.208 { 00:13:21.208 "name": "BaseBdev4", 00:13:21.208 "uuid": "0691f48a-afb2-5dd7-9304-32f20356f8b6", 00:13:21.208 "is_configured": true, 00:13:21.208 "data_offset": 2048, 00:13:21.208 "data_size": 63488 00:13:21.208 } 00:13:21.208 ] 00:13:21.208 }' 00:13:21.208 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.209 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.777 [2024-11-15 11:25:04.499915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:21.777 [2024-11-15 11:25:04.500152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:21.777 [2024-11-15 11:25:04.504063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.777 [2024-11-15 11:25:04.504387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.777 [2024-11-15 11:25:04.504699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:21.777 { 00:13:21.777 "results": [ 00:13:21.777 { 00:13:21.777 "job": "raid_bdev1", 00:13:21.777 "core_mask": "0x1", 00:13:21.777 "workload": "randrw", 00:13:21.777 "percentage": 50, 00:13:21.777 "status": "finished", 00:13:21.777 "queue_depth": 1, 00:13:21.777 "io_size": 131072, 00:13:21.777 "runtime": 1.454878, 00:13:21.777 "iops": 6212.89207754877, 00:13:21.777 "mibps": 776.6115096935963, 00:13:21.777 "io_failed": 0, 00:13:21.777 "io_timeout": 0, 00:13:21.777 "avg_latency_us": 156.29676814611435, 00:13:21.777 "min_latency_us": 43.054545454545455, 00:13:21.777 "max_latency_us": 2100.130909090909 00:13:21.777 } 00:13:21.777 ], 00:13:21.777 "core_count": 1 00:13:21.777 } 00:13:21.777 [2024-11-15 11:25:04.504851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75059 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75059 ']' 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75059 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75059 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75059' 00:13:21.777 killing process with pid 75059 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75059 00:13:21.777 [2024-11-15 11:25:04.545793] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.777 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75059 00:13:22.035 [2024-11-15 11:25:04.868444] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:23.411 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TQaNqHSPuf 00:13:23.411 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:23.411 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:23.411 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:23.411 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:23.411 ************************************ 00:13:23.411 END TEST raid_read_error_test 00:13:23.411 ************************************ 00:13:23.411 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:23.411 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:23.411 11:25:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:23.411 00:13:23.411 real 0m5.079s 00:13:23.411 user 0m6.132s 00:13:23.411 sys 0m0.670s 00:13:23.411 11:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:23.411 11:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.411 11:25:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:23.411 11:25:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:23.411 11:25:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:23.411 11:25:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:23.411 ************************************ 00:13:23.411 START TEST raid_write_error_test 00:13:23.411 ************************************ 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uw7q09Rpyf 00:13:23.411 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75206 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75206 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75206 ']' 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:23.411 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.411 [2024-11-15 11:25:06.302716] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:13:23.411 [2024-11-15 11:25:06.303040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75206 ] 00:13:23.670 [2024-11-15 11:25:06.480469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.928 [2024-11-15 11:25:06.635126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.928 [2024-11-15 11:25:06.860410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.928 [2024-11-15 11:25:06.860503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.495 BaseBdev1_malloc 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.495 true 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.495 [2024-11-15 11:25:07.340846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:24.495 [2024-11-15 11:25:07.340939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.495 [2024-11-15 11:25:07.340971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:24.495 [2024-11-15 11:25:07.340990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.495 [2024-11-15 11:25:07.344088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.495 [2024-11-15 11:25:07.344154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:24.495 BaseBdev1 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.495 BaseBdev2_malloc 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:24.495 11:25:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.495 true 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.495 [2024-11-15 11:25:07.406145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:24.495 [2024-11-15 11:25:07.406229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.495 [2024-11-15 11:25:07.406258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:24.495 [2024-11-15 11:25:07.406277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.495 [2024-11-15 11:25:07.409389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.495 [2024-11-15 11:25:07.409443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:24.495 BaseBdev2 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.495 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:24.496 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:24.496 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.496 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:24.755 BaseBdev3_malloc 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.755 true 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.755 [2024-11-15 11:25:07.484090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:24.755 [2024-11-15 11:25:07.484204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.755 [2024-11-15 11:25:07.484235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:24.755 [2024-11-15 11:25:07.484255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.755 [2024-11-15 11:25:07.487476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.755 [2024-11-15 11:25:07.487559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:24.755 BaseBdev3 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.755 BaseBdev4_malloc 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.755 true 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.755 [2024-11-15 11:25:07.550622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:24.755 [2024-11-15 11:25:07.550711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.755 [2024-11-15 11:25:07.550741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:24.755 [2024-11-15 11:25:07.550760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.755 [2024-11-15 11:25:07.553839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.755 [2024-11-15 11:25:07.553895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:24.755 BaseBdev4 
00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.755 [2024-11-15 11:25:07.558771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.755 [2024-11-15 11:25:07.561503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.755 [2024-11-15 11:25:07.561625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:24.755 [2024-11-15 11:25:07.561729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:24.755 [2024-11-15 11:25:07.562068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:24.755 [2024-11-15 11:25:07.562107] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:24.755 [2024-11-15 11:25:07.562448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:24.755 [2024-11-15 11:25:07.562719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:24.755 [2024-11-15 11:25:07.562747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:24.755 [2024-11-15 11:25:07.563033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.755 "name": "raid_bdev1", 00:13:24.755 "uuid": "4fb605e0-8510-497b-8d28-d72a780dd846", 00:13:24.755 "strip_size_kb": 0, 00:13:24.755 "state": "online", 00:13:24.755 "raid_level": "raid1", 00:13:24.755 "superblock": true, 00:13:24.755 "num_base_bdevs": 4, 00:13:24.755 "num_base_bdevs_discovered": 4, 00:13:24.755 
"num_base_bdevs_operational": 4, 00:13:24.755 "base_bdevs_list": [ 00:13:24.755 { 00:13:24.755 "name": "BaseBdev1", 00:13:24.755 "uuid": "e106d8aa-b5e1-5c6e-850b-41dd43eedb93", 00:13:24.755 "is_configured": true, 00:13:24.755 "data_offset": 2048, 00:13:24.755 "data_size": 63488 00:13:24.755 }, 00:13:24.755 { 00:13:24.755 "name": "BaseBdev2", 00:13:24.755 "uuid": "ea3dc745-d2f4-57e3-865b-4f23c5066aca", 00:13:24.755 "is_configured": true, 00:13:24.755 "data_offset": 2048, 00:13:24.755 "data_size": 63488 00:13:24.755 }, 00:13:24.755 { 00:13:24.755 "name": "BaseBdev3", 00:13:24.755 "uuid": "7cf4296a-fbc3-5c89-b663-522b8a9afb58", 00:13:24.755 "is_configured": true, 00:13:24.755 "data_offset": 2048, 00:13:24.755 "data_size": 63488 00:13:24.755 }, 00:13:24.755 { 00:13:24.755 "name": "BaseBdev4", 00:13:24.755 "uuid": "eefdaa76-267f-5f81-aaf7-5acda42d4c18", 00:13:24.755 "is_configured": true, 00:13:24.755 "data_offset": 2048, 00:13:24.755 "data_size": 63488 00:13:24.755 } 00:13:24.755 ] 00:13:24.755 }' 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.755 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.322 11:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:25.322 11:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:25.322 [2024-11-15 11:25:08.216692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.257 [2024-11-15 11:25:09.090404] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:26.257 [2024-11-15 11:25:09.090480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:26.257 [2024-11-15 11:25:09.090806] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.257 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.258 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.258 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.258 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.258 "name": "raid_bdev1", 00:13:26.258 "uuid": "4fb605e0-8510-497b-8d28-d72a780dd846", 00:13:26.258 "strip_size_kb": 0, 00:13:26.258 "state": "online", 00:13:26.258 "raid_level": "raid1", 00:13:26.258 "superblock": true, 00:13:26.258 "num_base_bdevs": 4, 00:13:26.258 "num_base_bdevs_discovered": 3, 00:13:26.258 "num_base_bdevs_operational": 3, 00:13:26.258 "base_bdevs_list": [ 00:13:26.258 { 00:13:26.258 "name": null, 00:13:26.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.258 "is_configured": false, 00:13:26.258 "data_offset": 0, 00:13:26.258 "data_size": 63488 00:13:26.258 }, 00:13:26.258 { 00:13:26.258 "name": "BaseBdev2", 00:13:26.258 "uuid": "ea3dc745-d2f4-57e3-865b-4f23c5066aca", 00:13:26.258 "is_configured": true, 00:13:26.258 "data_offset": 2048, 00:13:26.258 "data_size": 63488 00:13:26.258 }, 00:13:26.258 { 00:13:26.258 "name": "BaseBdev3", 00:13:26.258 "uuid": "7cf4296a-fbc3-5c89-b663-522b8a9afb58", 00:13:26.258 "is_configured": true, 00:13:26.258 "data_offset": 2048, 00:13:26.258 "data_size": 63488 00:13:26.258 }, 00:13:26.258 { 00:13:26.258 "name": "BaseBdev4", 00:13:26.258 "uuid": "eefdaa76-267f-5f81-aaf7-5acda42d4c18", 00:13:26.258 "is_configured": true, 00:13:26.258 "data_offset": 2048, 00:13:26.258 "data_size": 63488 00:13:26.258 } 00:13:26.258 ] 
00:13:26.258 }' 00:13:26.258 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.258 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.825 [2024-11-15 11:25:09.620537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:26.825 [2024-11-15 11:25:09.620624] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.825 [2024-11-15 11:25:09.624291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.825 [2024-11-15 11:25:09.624359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.825 [2024-11-15 11:25:09.624602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.825 [2024-11-15 11:25:09.624634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:26.825 { 00:13:26.825 "results": [ 00:13:26.825 { 00:13:26.825 "job": "raid_bdev1", 00:13:26.825 "core_mask": "0x1", 00:13:26.825 "workload": "randrw", 00:13:26.825 "percentage": 50, 00:13:26.825 "status": "finished", 00:13:26.825 "queue_depth": 1, 00:13:26.825 "io_size": 131072, 00:13:26.825 "runtime": 1.400971, 00:13:26.825 "iops": 6622.5496459241485, 00:13:26.825 "mibps": 827.8187057405186, 00:13:26.825 "io_failed": 0, 00:13:26.825 "io_timeout": 0, 00:13:26.825 "avg_latency_us": 146.11587920594172, 00:13:26.825 "min_latency_us": 44.21818181818182, 00:13:26.825 "max_latency_us": 2010.7636363636364 00:13:26.825 } 00:13:26.825 ], 00:13:26.825 "core_count": 1 
00:13:26.825 } 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75206 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75206 ']' 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75206 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75206 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75206' 00:13:26.825 killing process with pid 75206 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75206 00:13:26.825 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75206 00:13:26.825 [2024-11-15 11:25:09.668480] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.084 [2024-11-15 11:25:09.963760] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.459 11:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uw7q09Rpyf 00:13:28.459 11:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:28.459 11:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:28.459 11:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:28.459 11:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:28.459 11:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:28.459 11:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:28.459 11:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:28.459 00:13:28.459 real 0m4.962s 00:13:28.459 user 0m6.009s 00:13:28.459 sys 0m0.698s 00:13:28.459 ************************************ 00:13:28.459 END TEST raid_write_error_test 00:13:28.459 11:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:28.459 11:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.459 ************************************ 00:13:28.459 11:25:11 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:28.459 11:25:11 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:28.459 11:25:11 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:28.459 11:25:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:28.459 11:25:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:28.459 11:25:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.459 ************************************ 00:13:28.459 START TEST raid_rebuild_test 00:13:28.459 ************************************ 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:28.459 
11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:28.459 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75354 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75354 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75354 ']' 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:28.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:28.460 11:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.460 [2024-11-15 11:25:11.310318] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:13:28.460 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:28.460 Zero copy mechanism will not be used. 
00:13:28.460 [2024-11-15 11:25:11.310488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75354 ] 00:13:28.718 [2024-11-15 11:25:11.487090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.718 [2024-11-15 11:25:11.635519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.976 [2024-11-15 11:25:11.858430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.976 [2024-11-15 11:25:11.858547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.542 BaseBdev1_malloc 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.542 [2024-11-15 11:25:12.345219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:29.542 
[2024-11-15 11:25:12.345330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.542 [2024-11-15 11:25:12.345368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:29.542 [2024-11-15 11:25:12.345389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.542 [2024-11-15 11:25:12.348428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.542 [2024-11-15 11:25:12.348524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.542 BaseBdev1 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.542 BaseBdev2_malloc 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.542 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.543 [2024-11-15 11:25:12.404883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:29.543 [2024-11-15 11:25:12.404999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.543 [2024-11-15 11:25:12.405036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:13:29.543 [2024-11-15 11:25:12.405055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.543 [2024-11-15 11:25:12.408039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.543 [2024-11-15 11:25:12.408103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:29.543 BaseBdev2 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.543 spare_malloc 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.543 spare_delay 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.543 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.801 [2024-11-15 11:25:12.495028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:29.801 [2024-11-15 11:25:12.495142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:29.801 [2024-11-15 11:25:12.495171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:29.801 [2024-11-15 11:25:12.495224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.801 [2024-11-15 11:25:12.498495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.801 [2024-11-15 11:25:12.498591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:29.801 spare 00:13:29.801 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.801 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:29.801 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.801 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.801 [2024-11-15 11:25:12.507345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.801 [2024-11-15 11:25:12.509998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.801 [2024-11-15 11:25:12.510170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:29.801 [2024-11-15 11:25:12.510222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:29.801 [2024-11-15 11:25:12.510550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:29.801 [2024-11-15 11:25:12.510821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:29.802 [2024-11-15 11:25:12.510850] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:29.802 [2024-11-15 11:25:12.511038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.802 "name": "raid_bdev1", 00:13:29.802 "uuid": "7002e838-8e63-499e-8f18-19e6e7273c39", 00:13:29.802 "strip_size_kb": 0, 00:13:29.802 "state": "online", 00:13:29.802 
"raid_level": "raid1", 00:13:29.802 "superblock": false, 00:13:29.802 "num_base_bdevs": 2, 00:13:29.802 "num_base_bdevs_discovered": 2, 00:13:29.802 "num_base_bdevs_operational": 2, 00:13:29.802 "base_bdevs_list": [ 00:13:29.802 { 00:13:29.802 "name": "BaseBdev1", 00:13:29.802 "uuid": "faa55a10-a9b3-59f8-9223-6453246b2db4", 00:13:29.802 "is_configured": true, 00:13:29.802 "data_offset": 0, 00:13:29.802 "data_size": 65536 00:13:29.802 }, 00:13:29.802 { 00:13:29.802 "name": "BaseBdev2", 00:13:29.802 "uuid": "ee7c60e3-59e0-5e56-ba58-07c0a2b027e2", 00:13:29.802 "is_configured": true, 00:13:29.802 "data_offset": 0, 00:13:29.802 "data_size": 65536 00:13:29.802 } 00:13:29.802 ] 00:13:29.802 }' 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.802 11:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.368 [2024-11-15 11:25:13.083923] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.368 11:25:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:30.368 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.369 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:30.369 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.369 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.369 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:30.627 [2024-11-15 11:25:13.491723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:30.627 /dev/nbd0 00:13:30.627 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:30.627 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:13:30.627 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:30.627 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:30.627 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:30.627 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:30.627 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:30.627 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:30.627 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:30.627 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:30.627 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.627 1+0 records in 00:13:30.627 1+0 records out 00:13:30.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044627 s, 9.2 MB/s 00:13:30.628 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.628 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:30.628 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.628 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:30.628 11:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:30.628 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.628 11:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.628 11:25:13 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:30.628 11:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:30.628 11:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:37.258 65536+0 records in 00:13:37.258 65536+0 records out 00:13:37.258 33554432 bytes (34 MB, 32 MiB) copied, 6.41182 s, 5.2 MB/s 00:13:37.258 11:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:37.258 11:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.258 11:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:37.258 11:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.258 11:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:37.258 11:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.258 11:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:37.516 [2024-11-15 11:25:20.238485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.516 11:25:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:37.516 11:25:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.517 [2024-11-15 11:25:20.270626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.517 11:25:20 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.517 "name": "raid_bdev1", 00:13:37.517 "uuid": "7002e838-8e63-499e-8f18-19e6e7273c39", 00:13:37.517 "strip_size_kb": 0, 00:13:37.517 "state": "online", 00:13:37.517 "raid_level": "raid1", 00:13:37.517 "superblock": false, 00:13:37.517 "num_base_bdevs": 2, 00:13:37.517 "num_base_bdevs_discovered": 1, 00:13:37.517 "num_base_bdevs_operational": 1, 00:13:37.517 "base_bdevs_list": [ 00:13:37.517 { 00:13:37.517 "name": null, 00:13:37.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.517 "is_configured": false, 00:13:37.517 "data_offset": 0, 00:13:37.517 "data_size": 65536 00:13:37.517 }, 00:13:37.517 { 00:13:37.517 "name": "BaseBdev2", 00:13:37.517 "uuid": "ee7c60e3-59e0-5e56-ba58-07c0a2b027e2", 00:13:37.517 "is_configured": true, 00:13:37.517 "data_offset": 0, 00:13:37.517 "data_size": 65536 00:13:37.517 } 00:13:37.517 ] 00:13:37.517 }' 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.517 11:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.086 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.086 11:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.086 11:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.086 [2024-11-15 11:25:20.730852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.086 [2024-11-15 11:25:20.746754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:13:38.086 11:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.086 11:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:38.086 [2024-11-15 11:25:20.749296] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.027 "name": "raid_bdev1", 00:13:39.027 "uuid": "7002e838-8e63-499e-8f18-19e6e7273c39", 00:13:39.027 "strip_size_kb": 0, 00:13:39.027 "state": "online", 00:13:39.027 "raid_level": "raid1", 00:13:39.027 "superblock": false, 00:13:39.027 "num_base_bdevs": 2, 00:13:39.027 "num_base_bdevs_discovered": 2, 00:13:39.027 "num_base_bdevs_operational": 2, 00:13:39.027 "process": { 00:13:39.027 "type": "rebuild", 00:13:39.027 "target": "spare", 00:13:39.027 "progress": { 00:13:39.027 
"blocks": 20480, 00:13:39.027 "percent": 31 00:13:39.027 } 00:13:39.027 }, 00:13:39.027 "base_bdevs_list": [ 00:13:39.027 { 00:13:39.027 "name": "spare", 00:13:39.027 "uuid": "c28d77fe-a70e-5a3c-9bb5-a376cc7a99d0", 00:13:39.027 "is_configured": true, 00:13:39.027 "data_offset": 0, 00:13:39.027 "data_size": 65536 00:13:39.027 }, 00:13:39.027 { 00:13:39.027 "name": "BaseBdev2", 00:13:39.027 "uuid": "ee7c60e3-59e0-5e56-ba58-07c0a2b027e2", 00:13:39.027 "is_configured": true, 00:13:39.027 "data_offset": 0, 00:13:39.027 "data_size": 65536 00:13:39.027 } 00:13:39.027 ] 00:13:39.027 }' 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.027 11:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.027 [2024-11-15 11:25:21.914603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.028 [2024-11-15 11:25:21.960154] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:39.028 [2024-11-15 11:25:21.960259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.028 [2024-11-15 11:25:21.960284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.028 [2024-11-15 11:25:21.960299] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:39.287 11:25:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.287 11:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.287 11:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.287 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.287 "name": "raid_bdev1", 00:13:39.287 "uuid": "7002e838-8e63-499e-8f18-19e6e7273c39", 00:13:39.287 "strip_size_kb": 0, 00:13:39.287 "state": "online", 00:13:39.287 "raid_level": "raid1", 00:13:39.287 
"superblock": false, 00:13:39.287 "num_base_bdevs": 2, 00:13:39.287 "num_base_bdevs_discovered": 1, 00:13:39.287 "num_base_bdevs_operational": 1, 00:13:39.287 "base_bdevs_list": [ 00:13:39.287 { 00:13:39.287 "name": null, 00:13:39.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.287 "is_configured": false, 00:13:39.287 "data_offset": 0, 00:13:39.287 "data_size": 65536 00:13:39.287 }, 00:13:39.287 { 00:13:39.287 "name": "BaseBdev2", 00:13:39.287 "uuid": "ee7c60e3-59e0-5e56-ba58-07c0a2b027e2", 00:13:39.287 "is_configured": true, 00:13:39.287 "data_offset": 0, 00:13:39.287 "data_size": 65536 00:13:39.287 } 00:13:39.287 ] 00:13:39.287 }' 00:13:39.287 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.287 11:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.546 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.546 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.546 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.546 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.546 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.546 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.546 11:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.546 11:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.546 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.546 11:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.805 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:39.805 "name": "raid_bdev1", 00:13:39.805 "uuid": "7002e838-8e63-499e-8f18-19e6e7273c39", 00:13:39.805 "strip_size_kb": 0, 00:13:39.805 "state": "online", 00:13:39.805 "raid_level": "raid1", 00:13:39.805 "superblock": false, 00:13:39.805 "num_base_bdevs": 2, 00:13:39.805 "num_base_bdevs_discovered": 1, 00:13:39.805 "num_base_bdevs_operational": 1, 00:13:39.805 "base_bdevs_list": [ 00:13:39.805 { 00:13:39.805 "name": null, 00:13:39.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.805 "is_configured": false, 00:13:39.805 "data_offset": 0, 00:13:39.805 "data_size": 65536 00:13:39.805 }, 00:13:39.805 { 00:13:39.805 "name": "BaseBdev2", 00:13:39.805 "uuid": "ee7c60e3-59e0-5e56-ba58-07c0a2b027e2", 00:13:39.805 "is_configured": true, 00:13:39.805 "data_offset": 0, 00:13:39.805 "data_size": 65536 00:13:39.805 } 00:13:39.805 ] 00:13:39.805 }' 00:13:39.805 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.805 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.805 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.805 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.805 11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:39.805 11:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.805 11:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.805 [2024-11-15 11:25:22.633308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.806 [2024-11-15 11:25:22.650891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:39.806 11:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.806 
11:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:39.806 [2024-11-15 11:25:22.653719] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.741 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.741 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.742 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.742 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.742 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.742 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.742 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.742 11:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.742 11:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.742 11:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.000 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.000 "name": "raid_bdev1", 00:13:41.000 "uuid": "7002e838-8e63-499e-8f18-19e6e7273c39", 00:13:41.000 "strip_size_kb": 0, 00:13:41.000 "state": "online", 00:13:41.000 "raid_level": "raid1", 00:13:41.000 "superblock": false, 00:13:41.000 "num_base_bdevs": 2, 00:13:41.000 "num_base_bdevs_discovered": 2, 00:13:41.000 "num_base_bdevs_operational": 2, 00:13:41.000 "process": { 00:13:41.000 "type": "rebuild", 00:13:41.000 "target": "spare", 00:13:41.000 "progress": { 00:13:41.000 "blocks": 20480, 00:13:41.000 "percent": 31 00:13:41.000 } 00:13:41.000 }, 00:13:41.000 "base_bdevs_list": [ 
00:13:41.000 { 00:13:41.000 "name": "spare", 00:13:41.000 "uuid": "c28d77fe-a70e-5a3c-9bb5-a376cc7a99d0", 00:13:41.000 "is_configured": true, 00:13:41.000 "data_offset": 0, 00:13:41.000 "data_size": 65536 00:13:41.000 }, 00:13:41.000 { 00:13:41.000 "name": "BaseBdev2", 00:13:41.000 "uuid": "ee7c60e3-59e0-5e56-ba58-07c0a2b027e2", 00:13:41.000 "is_configured": true, 00:13:41.000 "data_offset": 0, 00:13:41.000 "data_size": 65536 00:13:41.000 } 00:13:41.000 ] 00:13:41.000 }' 00:13:41.000 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.000 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.000 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=400 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.001 
11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.001 "name": "raid_bdev1", 00:13:41.001 "uuid": "7002e838-8e63-499e-8f18-19e6e7273c39", 00:13:41.001 "strip_size_kb": 0, 00:13:41.001 "state": "online", 00:13:41.001 "raid_level": "raid1", 00:13:41.001 "superblock": false, 00:13:41.001 "num_base_bdevs": 2, 00:13:41.001 "num_base_bdevs_discovered": 2, 00:13:41.001 "num_base_bdevs_operational": 2, 00:13:41.001 "process": { 00:13:41.001 "type": "rebuild", 00:13:41.001 "target": "spare", 00:13:41.001 "progress": { 00:13:41.001 "blocks": 22528, 00:13:41.001 "percent": 34 00:13:41.001 } 00:13:41.001 }, 00:13:41.001 "base_bdevs_list": [ 00:13:41.001 { 00:13:41.001 "name": "spare", 00:13:41.001 "uuid": "c28d77fe-a70e-5a3c-9bb5-a376cc7a99d0", 00:13:41.001 "is_configured": true, 00:13:41.001 "data_offset": 0, 00:13:41.001 "data_size": 65536 00:13:41.001 }, 00:13:41.001 { 00:13:41.001 "name": "BaseBdev2", 00:13:41.001 "uuid": "ee7c60e3-59e0-5e56-ba58-07c0a2b027e2", 00:13:41.001 "is_configured": true, 00:13:41.001 "data_offset": 0, 00:13:41.001 "data_size": 65536 00:13:41.001 } 00:13:41.001 ] 00:13:41.001 }' 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:13:41.001 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.259 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.259 11:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.193 11:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.193 11:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.193 11:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.193 11:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.193 11:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.193 11:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.193 11:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.193 11:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.193 11:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.193 11:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.193 11:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.193 11:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.193 "name": "raid_bdev1", 00:13:42.193 "uuid": "7002e838-8e63-499e-8f18-19e6e7273c39", 00:13:42.193 "strip_size_kb": 0, 00:13:42.193 "state": "online", 00:13:42.193 "raid_level": "raid1", 00:13:42.193 "superblock": false, 00:13:42.193 "num_base_bdevs": 2, 00:13:42.193 "num_base_bdevs_discovered": 2, 00:13:42.193 "num_base_bdevs_operational": 2, 00:13:42.193 "process": { 
00:13:42.193 "type": "rebuild", 00:13:42.193 "target": "spare", 00:13:42.193 "progress": { 00:13:42.193 "blocks": 47104, 00:13:42.193 "percent": 71 00:13:42.193 } 00:13:42.193 }, 00:13:42.193 "base_bdevs_list": [ 00:13:42.193 { 00:13:42.193 "name": "spare", 00:13:42.193 "uuid": "c28d77fe-a70e-5a3c-9bb5-a376cc7a99d0", 00:13:42.193 "is_configured": true, 00:13:42.194 "data_offset": 0, 00:13:42.194 "data_size": 65536 00:13:42.194 }, 00:13:42.194 { 00:13:42.194 "name": "BaseBdev2", 00:13:42.194 "uuid": "ee7c60e3-59e0-5e56-ba58-07c0a2b027e2", 00:13:42.194 "is_configured": true, 00:13:42.194 "data_offset": 0, 00:13:42.194 "data_size": 65536 00:13:42.194 } 00:13:42.194 ] 00:13:42.194 }' 00:13:42.194 11:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.194 11:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.194 11:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.452 11:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.452 11:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.019 [2024-11-15 11:25:25.882450] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:43.019 [2024-11-15 11:25:25.882780] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:43.019 [2024-11-15 11:25:25.882868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.277 "name": "raid_bdev1", 00:13:43.277 "uuid": "7002e838-8e63-499e-8f18-19e6e7273c39", 00:13:43.277 "strip_size_kb": 0, 00:13:43.277 "state": "online", 00:13:43.277 "raid_level": "raid1", 00:13:43.277 "superblock": false, 00:13:43.277 "num_base_bdevs": 2, 00:13:43.277 "num_base_bdevs_discovered": 2, 00:13:43.277 "num_base_bdevs_operational": 2, 00:13:43.277 "base_bdevs_list": [ 00:13:43.277 { 00:13:43.277 "name": "spare", 00:13:43.277 "uuid": "c28d77fe-a70e-5a3c-9bb5-a376cc7a99d0", 00:13:43.277 "is_configured": true, 00:13:43.277 "data_offset": 0, 00:13:43.277 "data_size": 65536 00:13:43.277 }, 00:13:43.277 { 00:13:43.277 "name": "BaseBdev2", 00:13:43.277 "uuid": "ee7c60e3-59e0-5e56-ba58-07c0a2b027e2", 00:13:43.277 "is_configured": true, 00:13:43.277 "data_offset": 0, 00:13:43.277 "data_size": 65536 00:13:43.277 } 00:13:43.277 ] 00:13:43.277 }' 00:13:43.277 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.536 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:43.536 11:25:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.536 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:43.536 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:43.536 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.536 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.536 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.536 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.536 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.536 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.537 "name": "raid_bdev1", 00:13:43.537 "uuid": "7002e838-8e63-499e-8f18-19e6e7273c39", 00:13:43.537 "strip_size_kb": 0, 00:13:43.537 "state": "online", 00:13:43.537 "raid_level": "raid1", 00:13:43.537 "superblock": false, 00:13:43.537 "num_base_bdevs": 2, 00:13:43.537 "num_base_bdevs_discovered": 2, 00:13:43.537 "num_base_bdevs_operational": 2, 00:13:43.537 "base_bdevs_list": [ 00:13:43.537 { 00:13:43.537 "name": "spare", 00:13:43.537 "uuid": "c28d77fe-a70e-5a3c-9bb5-a376cc7a99d0", 00:13:43.537 "is_configured": true, 
00:13:43.537 "data_offset": 0, 00:13:43.537 "data_size": 65536 00:13:43.537 }, 00:13:43.537 { 00:13:43.537 "name": "BaseBdev2", 00:13:43.537 "uuid": "ee7c60e3-59e0-5e56-ba58-07c0a2b027e2", 00:13:43.537 "is_configured": true, 00:13:43.537 "data_offset": 0, 00:13:43.537 "data_size": 65536 00:13:43.537 } 00:13:43.537 ] 00:13:43.537 }' 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.537 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.796 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.796 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.796 "name": "raid_bdev1", 00:13:43.796 "uuid": "7002e838-8e63-499e-8f18-19e6e7273c39", 00:13:43.796 "strip_size_kb": 0, 00:13:43.796 "state": "online", 00:13:43.796 "raid_level": "raid1", 00:13:43.796 "superblock": false, 00:13:43.796 "num_base_bdevs": 2, 00:13:43.796 "num_base_bdevs_discovered": 2, 00:13:43.796 "num_base_bdevs_operational": 2, 00:13:43.796 "base_bdevs_list": [ 00:13:43.796 { 00:13:43.796 "name": "spare", 00:13:43.796 "uuid": "c28d77fe-a70e-5a3c-9bb5-a376cc7a99d0", 00:13:43.796 "is_configured": true, 00:13:43.796 "data_offset": 0, 00:13:43.796 "data_size": 65536 00:13:43.796 }, 00:13:43.796 { 00:13:43.796 "name": "BaseBdev2", 00:13:43.796 "uuid": "ee7c60e3-59e0-5e56-ba58-07c0a2b027e2", 00:13:43.796 "is_configured": true, 00:13:43.796 "data_offset": 0, 00:13:43.796 "data_size": 65536 00:13:43.796 } 00:13:43.796 ] 00:13:43.796 }' 00:13:43.796 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.796 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.055 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:44.055 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.055 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.055 [2024-11-15 11:25:26.991120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:44.055 [2024-11-15 11:25:26.991368] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.055 [2024-11-15 11:25:26.991508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.055 [2024-11-15 11:25:26.991628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.055 [2024-11-15 11:25:26.991647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:44.055 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.055 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.055 11:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:44.055 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.055 11:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.314 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:44.574 /dev/nbd0 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.574 1+0 records in 00:13:44.574 1+0 records out 00:13:44.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031777 s, 12.9 MB/s 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.574 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:44.832 /dev/nbd1 00:13:44.832 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:44.832 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:44.832 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:44.832 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:44.832 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:44.832 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:44.832 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:44.832 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:44.832 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:44.832 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:44.832 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.832 1+0 records in 00:13:44.832 1+0 records out 00:13:44.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413746 s, 9.9 MB/s 00:13:44.833 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.833 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:44.833 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.833 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:44.833 11:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:44.833 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.833 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.833 11:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:45.091 11:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:45.091 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.091 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:45.091 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.091 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:45.091 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.091 11:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:45.350 11:25:28 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:45.350 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:45.350 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:45.350 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.350 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.350 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:45.350 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:45.350 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.350 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.350 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75354 00:13:45.609 11:25:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75354 ']' 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75354 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75354 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:45.609 11:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:45.609 killing process with pid 75354 00:13:45.610 11:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75354' 00:13:45.610 Received shutdown signal, test time was about 60.000000 seconds 00:13:45.610 00:13:45.610 Latency(us) 00:13:45.610 [2024-11-15T11:25:28.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.610 [2024-11-15T11:25:28.560Z] =================================================================================================================== 00:13:45.610 [2024-11-15T11:25:28.560Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:45.610 11:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75354 00:13:45.610 [2024-11-15 11:25:28.545669] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.610 11:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75354 00:13:45.869 [2024-11-15 11:25:28.776926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:47.247 11:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:47.247 00:13:47.247 real 0m18.596s 00:13:47.247 user 0m20.825s 00:13:47.247 sys 0m3.627s 00:13:47.247 11:25:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:47.247 11:25:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.247 ************************************ 00:13:47.247 END TEST raid_rebuild_test 00:13:47.247 ************************************ 00:13:47.247 11:25:29 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:47.247 11:25:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:47.247 11:25:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:47.247 11:25:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:47.247 ************************************ 00:13:47.247 START TEST raid_rebuild_test_sb 00:13:47.247 ************************************ 00:13:47.247 11:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:13:47.247 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:47.247 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:47.247 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:47.247 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:47.247 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75807 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75807 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75807 ']' 00:13:47.248 11:25:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:47.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:47.248 11:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.248 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:47.248 Zero copy mechanism will not be used. 00:13:47.248 [2024-11-15 11:25:29.976729] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:13:47.248 [2024-11-15 11:25:29.976923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75807 ] 00:13:47.248 [2024-11-15 11:25:30.161842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.518 [2024-11-15 11:25:30.292870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.777 [2024-11-15 11:25:30.514088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.777 [2024-11-15 11:25:30.514137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.037 11:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:48.037 11:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:48.037 11:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:13:48.037 11:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:48.037 11:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.037 11:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.037 BaseBdev1_malloc 00:13:48.037 11:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.037 11:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:48.037 11:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.037 11:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.037 [2024-11-15 11:25:30.984083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:48.037 [2024-11-15 11:25:30.984160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.037 [2024-11-15 11:25:30.984246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:48.037 [2024-11-15 11:25:30.984268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.336 [2024-11-15 11:25:30.987554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.336 [2024-11-15 11:25:30.987809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:48.336 BaseBdev1 00:13:48.336 11:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.336 11:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:48.336 11:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:48.336 11:25:30 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.336 11:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.336 BaseBdev2_malloc 00:13:48.336 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.336 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:48.336 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.336 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.336 [2024-11-15 11:25:31.039491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:48.337 [2024-11-15 11:25:31.039743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.337 [2024-11-15 11:25:31.039786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:48.337 [2024-11-15 11:25:31.039806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.337 [2024-11-15 11:25:31.042790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.337 [2024-11-15 11:25:31.042989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:48.337 BaseBdev2 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.337 spare_malloc 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.337 spare_delay 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.337 [2024-11-15 11:25:31.116115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:48.337 [2024-11-15 11:25:31.116249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.337 [2024-11-15 11:25:31.116281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:48.337 [2024-11-15 11:25:31.116299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.337 [2024-11-15 11:25:31.119248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.337 [2024-11-15 11:25:31.119330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:48.337 spare 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.337 11:25:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.337 [2024-11-15 11:25:31.124258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.337 [2024-11-15 11:25:31.126797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.337 [2024-11-15 11:25:31.127224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:48.337 [2024-11-15 11:25:31.127254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:48.337 [2024-11-15 11:25:31.127602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:48.337 [2024-11-15 11:25:31.127804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:48.337 [2024-11-15 11:25:31.127819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:48.337 [2024-11-15 11:25:31.127975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.337 "name": "raid_bdev1", 00:13:48.337 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:13:48.337 "strip_size_kb": 0, 00:13:48.337 "state": "online", 00:13:48.337 "raid_level": "raid1", 00:13:48.337 "superblock": true, 00:13:48.337 "num_base_bdevs": 2, 00:13:48.337 "num_base_bdevs_discovered": 2, 00:13:48.337 "num_base_bdevs_operational": 2, 00:13:48.337 "base_bdevs_list": [ 00:13:48.337 { 00:13:48.337 "name": "BaseBdev1", 00:13:48.337 "uuid": "b4075f94-1ef0-5244-bd54-c0cabc8de224", 00:13:48.337 "is_configured": true, 00:13:48.337 "data_offset": 2048, 00:13:48.337 "data_size": 63488 00:13:48.337 }, 00:13:48.337 { 00:13:48.337 "name": "BaseBdev2", 00:13:48.337 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:13:48.337 "is_configured": true, 00:13:48.337 "data_offset": 2048, 00:13:48.337 "data_size": 63488 00:13:48.337 } 00:13:48.337 ] 00:13:48.337 }' 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.337 11:25:31 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:48.905 [2024-11-15 11:25:31.660856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.905 11:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:49.165 [2024-11-15 11:25:32.004647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:49.165 /dev/nbd0 00:13:49.165 11:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:49.165 11:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:49.165 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:49.165 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:49.165 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:49.165 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:49.165 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:49.165 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:49.165 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:13:49.165 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:49.165 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.165 1+0 records in 00:13:49.165 1+0 records out 00:13:49.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289313 s, 14.2 MB/s 00:13:49.166 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.166 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:49.166 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.166 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:49.166 11:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:49.166 11:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.166 11:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.166 11:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:49.166 11:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:49.166 11:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:55.735 63488+0 records in 00:13:55.736 63488+0 records out 00:13:55.736 32505856 bytes (33 MB, 31 MiB) copied, 5.89986 s, 5.5 MB/s 00:13:55.736 11:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:55.736 11:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.736 11:25:37 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:55.736 11:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:55.736 11:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:55.736 11:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.736 11:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:55.736 [2024-11-15 11:25:38.237450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.736 [2024-11-15 11:25:38.269849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.736 "name": "raid_bdev1", 00:13:55.736 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:13:55.736 "strip_size_kb": 0, 00:13:55.736 "state": "online", 00:13:55.736 "raid_level": "raid1", 00:13:55.736 "superblock": true, 
00:13:55.736 "num_base_bdevs": 2, 00:13:55.736 "num_base_bdevs_discovered": 1, 00:13:55.736 "num_base_bdevs_operational": 1, 00:13:55.736 "base_bdevs_list": [ 00:13:55.736 { 00:13:55.736 "name": null, 00:13:55.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.736 "is_configured": false, 00:13:55.736 "data_offset": 0, 00:13:55.736 "data_size": 63488 00:13:55.736 }, 00:13:55.736 { 00:13:55.736 "name": "BaseBdev2", 00:13:55.736 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:13:55.736 "is_configured": true, 00:13:55.736 "data_offset": 2048, 00:13:55.736 "data_size": 63488 00:13:55.736 } 00:13:55.736 ] 00:13:55.736 }' 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.736 11:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.994 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:55.994 11:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.994 11:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.994 [2024-11-15 11:25:38.806081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.994 [2024-11-15 11:25:38.823513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:55.994 11:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.994 11:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:55.994 [2024-11-15 11:25:38.826287] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:56.931 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.931 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:56.931 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.931 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.931 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.931 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.931 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.931 11:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.931 11:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.931 11:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.191 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.191 "name": "raid_bdev1", 00:13:57.191 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:13:57.191 "strip_size_kb": 0, 00:13:57.191 "state": "online", 00:13:57.191 "raid_level": "raid1", 00:13:57.191 "superblock": true, 00:13:57.191 "num_base_bdevs": 2, 00:13:57.191 "num_base_bdevs_discovered": 2, 00:13:57.191 "num_base_bdevs_operational": 2, 00:13:57.191 "process": { 00:13:57.191 "type": "rebuild", 00:13:57.191 "target": "spare", 00:13:57.191 "progress": { 00:13:57.191 "blocks": 20480, 00:13:57.191 "percent": 32 00:13:57.191 } 00:13:57.191 }, 00:13:57.191 "base_bdevs_list": [ 00:13:57.191 { 00:13:57.191 "name": "spare", 00:13:57.191 "uuid": "72b2f4cf-648c-5309-9c09-4555b2d08ac7", 00:13:57.191 "is_configured": true, 00:13:57.191 "data_offset": 2048, 00:13:57.191 "data_size": 63488 00:13:57.191 }, 00:13:57.191 { 00:13:57.191 "name": "BaseBdev2", 00:13:57.191 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:13:57.191 "is_configured": true, 00:13:57.191 "data_offset": 2048, 00:13:57.191 "data_size": 63488 
00:13:57.191 } 00:13:57.191 ] 00:13:57.191 }' 00:13:57.191 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.191 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.191 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.191 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.191 11:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:57.191 11:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.191 11:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.191 [2024-11-15 11:25:39.987832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:57.191 [2024-11-15 11:25:40.036608] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:57.191 [2024-11-15 11:25:40.036833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.191 [2024-11-15 11:25:40.036959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:57.191 [2024-11-15 11:25:40.037021] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.191 "name": "raid_bdev1", 00:13:57.191 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:13:57.191 "strip_size_kb": 0, 00:13:57.191 "state": "online", 00:13:57.191 "raid_level": "raid1", 00:13:57.191 "superblock": true, 00:13:57.191 "num_base_bdevs": 2, 00:13:57.191 "num_base_bdevs_discovered": 1, 00:13:57.191 "num_base_bdevs_operational": 1, 00:13:57.191 "base_bdevs_list": [ 00:13:57.191 { 00:13:57.191 "name": null, 00:13:57.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.191 "is_configured": false, 00:13:57.191 "data_offset": 0, 00:13:57.191 "data_size": 63488 00:13:57.191 }, 00:13:57.191 { 00:13:57.191 "name": "BaseBdev2", 00:13:57.191 "uuid": 
"dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:13:57.191 "is_configured": true, 00:13:57.191 "data_offset": 2048, 00:13:57.191 "data_size": 63488 00:13:57.191 } 00:13:57.191 ] 00:13:57.191 }' 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.191 11:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.759 "name": "raid_bdev1", 00:13:57.759 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:13:57.759 "strip_size_kb": 0, 00:13:57.759 "state": "online", 00:13:57.759 "raid_level": "raid1", 00:13:57.759 "superblock": true, 00:13:57.759 "num_base_bdevs": 2, 00:13:57.759 "num_base_bdevs_discovered": 1, 00:13:57.759 "num_base_bdevs_operational": 1, 00:13:57.759 "base_bdevs_list": [ 00:13:57.759 { 
00:13:57.759 "name": null, 00:13:57.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.759 "is_configured": false, 00:13:57.759 "data_offset": 0, 00:13:57.759 "data_size": 63488 00:13:57.759 }, 00:13:57.759 { 00:13:57.759 "name": "BaseBdev2", 00:13:57.759 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:13:57.759 "is_configured": true, 00:13:57.759 "data_offset": 2048, 00:13:57.759 "data_size": 63488 00:13:57.759 } 00:13:57.759 ] 00:13:57.759 }' 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:57.759 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.018 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.018 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:58.018 11:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.018 11:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.018 [2024-11-15 11:25:40.744706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.018 [2024-11-15 11:25:40.760237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:58.018 11:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.018 11:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:58.019 [2024-11-15 11:25:40.762940] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:58.955 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.955 11:25:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.955 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.955 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.955 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.955 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.955 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.955 11:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.955 11:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.955 11:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.955 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.955 "name": "raid_bdev1", 00:13:58.955 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:13:58.955 "strip_size_kb": 0, 00:13:58.955 "state": "online", 00:13:58.955 "raid_level": "raid1", 00:13:58.955 "superblock": true, 00:13:58.955 "num_base_bdevs": 2, 00:13:58.955 "num_base_bdevs_discovered": 2, 00:13:58.956 "num_base_bdevs_operational": 2, 00:13:58.956 "process": { 00:13:58.956 "type": "rebuild", 00:13:58.956 "target": "spare", 00:13:58.956 "progress": { 00:13:58.956 "blocks": 20480, 00:13:58.956 "percent": 32 00:13:58.956 } 00:13:58.956 }, 00:13:58.956 "base_bdevs_list": [ 00:13:58.956 { 00:13:58.956 "name": "spare", 00:13:58.956 "uuid": "72b2f4cf-648c-5309-9c09-4555b2d08ac7", 00:13:58.956 "is_configured": true, 00:13:58.956 "data_offset": 2048, 00:13:58.956 "data_size": 63488 00:13:58.956 }, 00:13:58.956 { 00:13:58.956 "name": "BaseBdev2", 00:13:58.956 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:13:58.956 
"is_configured": true, 00:13:58.956 "data_offset": 2048, 00:13:58.956 "data_size": 63488 00:13:58.956 } 00:13:58.956 ] 00:13:58.956 }' 00:13:58.956 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.956 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.956 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:59.215 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=418 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.215 "name": "raid_bdev1", 00:13:59.215 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:13:59.215 "strip_size_kb": 0, 00:13:59.215 "state": "online", 00:13:59.215 "raid_level": "raid1", 00:13:59.215 "superblock": true, 00:13:59.215 "num_base_bdevs": 2, 00:13:59.215 "num_base_bdevs_discovered": 2, 00:13:59.215 "num_base_bdevs_operational": 2, 00:13:59.215 "process": { 00:13:59.215 "type": "rebuild", 00:13:59.215 "target": "spare", 00:13:59.215 "progress": { 00:13:59.215 "blocks": 22528, 00:13:59.215 "percent": 35 00:13:59.215 } 00:13:59.215 }, 00:13:59.215 "base_bdevs_list": [ 00:13:59.215 { 00:13:59.215 "name": "spare", 00:13:59.215 "uuid": "72b2f4cf-648c-5309-9c09-4555b2d08ac7", 00:13:59.215 "is_configured": true, 00:13:59.215 "data_offset": 2048, 00:13:59.215 "data_size": 63488 00:13:59.215 }, 00:13:59.215 { 00:13:59.215 "name": "BaseBdev2", 00:13:59.215 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:13:59.215 "is_configured": true, 00:13:59.215 "data_offset": 2048, 00:13:59.215 "data_size": 63488 00:13:59.215 } 00:13:59.215 ] 00:13:59.215 }' 00:13:59.215 11:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.215 11:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.215 11:25:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.215 11:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.215 11:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:00.152 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.152 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.152 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.152 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.152 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.152 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.411 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.411 11:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.411 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.411 11:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.411 11:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.411 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.411 "name": "raid_bdev1", 00:14:00.411 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:00.411 "strip_size_kb": 0, 00:14:00.411 "state": "online", 00:14:00.411 "raid_level": "raid1", 00:14:00.411 "superblock": true, 00:14:00.411 "num_base_bdevs": 2, 00:14:00.411 "num_base_bdevs_discovered": 2, 00:14:00.411 "num_base_bdevs_operational": 2, 00:14:00.411 "process": { 
00:14:00.411 "type": "rebuild", 00:14:00.411 "target": "spare", 00:14:00.411 "progress": { 00:14:00.411 "blocks": 47104, 00:14:00.411 "percent": 74 00:14:00.411 } 00:14:00.411 }, 00:14:00.411 "base_bdevs_list": [ 00:14:00.411 { 00:14:00.411 "name": "spare", 00:14:00.411 "uuid": "72b2f4cf-648c-5309-9c09-4555b2d08ac7", 00:14:00.411 "is_configured": true, 00:14:00.411 "data_offset": 2048, 00:14:00.411 "data_size": 63488 00:14:00.411 }, 00:14:00.411 { 00:14:00.411 "name": "BaseBdev2", 00:14:00.411 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:00.411 "is_configured": true, 00:14:00.411 "data_offset": 2048, 00:14:00.411 "data_size": 63488 00:14:00.411 } 00:14:00.411 ] 00:14:00.411 }' 00:14:00.411 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.411 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.411 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.411 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.411 11:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:00.979 [2024-11-15 11:25:43.891657] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:00.979 [2024-11-15 11:25:43.891761] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:00.979 [2024-11-15 11:25:43.891914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.547 
11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.547 "name": "raid_bdev1", 00:14:01.547 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:01.547 "strip_size_kb": 0, 00:14:01.547 "state": "online", 00:14:01.547 "raid_level": "raid1", 00:14:01.547 "superblock": true, 00:14:01.547 "num_base_bdevs": 2, 00:14:01.547 "num_base_bdevs_discovered": 2, 00:14:01.547 "num_base_bdevs_operational": 2, 00:14:01.547 "base_bdevs_list": [ 00:14:01.547 { 00:14:01.547 "name": "spare", 00:14:01.547 "uuid": "72b2f4cf-648c-5309-9c09-4555b2d08ac7", 00:14:01.547 "is_configured": true, 00:14:01.547 "data_offset": 2048, 00:14:01.547 "data_size": 63488 00:14:01.547 }, 00:14:01.547 { 00:14:01.547 "name": "BaseBdev2", 00:14:01.547 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:01.547 "is_configured": true, 00:14:01.547 "data_offset": 2048, 00:14:01.547 "data_size": 63488 00:14:01.547 } 00:14:01.547 ] 00:14:01.547 }' 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.547 "name": "raid_bdev1", 00:14:01.547 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:01.547 "strip_size_kb": 0, 00:14:01.547 "state": "online", 00:14:01.547 "raid_level": "raid1", 00:14:01.547 "superblock": true, 00:14:01.547 "num_base_bdevs": 2, 00:14:01.547 "num_base_bdevs_discovered": 2, 00:14:01.547 "num_base_bdevs_operational": 2, 00:14:01.547 "base_bdevs_list": [ 00:14:01.547 { 00:14:01.547 
"name": "spare", 00:14:01.547 "uuid": "72b2f4cf-648c-5309-9c09-4555b2d08ac7", 00:14:01.547 "is_configured": true, 00:14:01.547 "data_offset": 2048, 00:14:01.547 "data_size": 63488 00:14:01.547 }, 00:14:01.547 { 00:14:01.547 "name": "BaseBdev2", 00:14:01.547 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:01.547 "is_configured": true, 00:14:01.547 "data_offset": 2048, 00:14:01.547 "data_size": 63488 00:14:01.547 } 00:14:01.547 ] 00:14:01.547 }' 00:14:01.547 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.807 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.807 "name": "raid_bdev1", 00:14:01.807 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:01.807 "strip_size_kb": 0, 00:14:01.807 "state": "online", 00:14:01.807 "raid_level": "raid1", 00:14:01.807 "superblock": true, 00:14:01.807 "num_base_bdevs": 2, 00:14:01.807 "num_base_bdevs_discovered": 2, 00:14:01.807 "num_base_bdevs_operational": 2, 00:14:01.807 "base_bdevs_list": [ 00:14:01.807 { 00:14:01.807 "name": "spare", 00:14:01.807 "uuid": "72b2f4cf-648c-5309-9c09-4555b2d08ac7", 00:14:01.807 "is_configured": true, 00:14:01.807 "data_offset": 2048, 00:14:01.807 "data_size": 63488 00:14:01.807 }, 00:14:01.807 { 00:14:01.807 "name": "BaseBdev2", 00:14:01.807 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:01.807 "is_configured": true, 00:14:01.807 "data_offset": 2048, 00:14:01.807 "data_size": 63488 00:14:01.807 } 00:14:01.807 ] 00:14:01.807 }' 00:14:01.808 11:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.808 11:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.376 [2024-11-15 11:25:45.089029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.376 [2024-11-15 11:25:45.089254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.376 [2024-11-15 11:25:45.089393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.376 [2024-11-15 11:25:45.089500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.376 [2024-11-15 11:25:45.089520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:02.376 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:02.635 /dev/nbd0 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.635 1+0 records in 00:14:02.635 1+0 records out 00:14:02.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243756 s, 16.8 MB/s 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:02.635 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:02.893 /dev/nbd1 00:14:02.893 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:03.152 11:25:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.152 1+0 records in 00:14:03.152 1+0 records out 00:14:03.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576726 s, 7.1 MB/s 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:03.152 11:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:03.152 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:03.152 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.152 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:03.152 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.152 
11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:03.152 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.152 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:03.718 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:03.718 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:03.718 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:03.718 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.718 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.718 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:03.718 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:03.718 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.718 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.718 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.977 [2024-11-15 11:25:46.766820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:03.977 [2024-11-15 11:25:46.767055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.977 [2024-11-15 11:25:46.767107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:03.977 [2024-11-15 11:25:46.767128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.977 [2024-11-15 11:25:46.770811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.977 [2024-11-15 11:25:46.770896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:03.977 [2024-11-15 11:25:46.771041] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:03.977 [2024-11-15 
11:25:46.771110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.977 [2024-11-15 11:25:46.771514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.977 spare 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.977 [2024-11-15 11:25:46.871641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:03.977 [2024-11-15 11:25:46.871684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:03.977 [2024-11-15 11:25:46.872042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:03.977 [2024-11-15 11:25:46.872252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:03.977 [2024-11-15 11:25:46.872271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:03.977 [2024-11-15 11:25:46.872571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.977 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.235 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.235 "name": "raid_bdev1", 00:14:04.235 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:04.235 "strip_size_kb": 0, 00:14:04.235 "state": "online", 00:14:04.235 "raid_level": "raid1", 00:14:04.235 "superblock": true, 00:14:04.235 "num_base_bdevs": 2, 00:14:04.235 "num_base_bdevs_discovered": 2, 00:14:04.235 "num_base_bdevs_operational": 2, 00:14:04.235 "base_bdevs_list": [ 00:14:04.235 { 00:14:04.235 "name": "spare", 00:14:04.235 "uuid": "72b2f4cf-648c-5309-9c09-4555b2d08ac7", 00:14:04.235 "is_configured": true, 00:14:04.235 "data_offset": 2048, 00:14:04.236 "data_size": 63488 00:14:04.236 }, 00:14:04.236 { 00:14:04.236 "name": "BaseBdev2", 00:14:04.236 "uuid": 
"dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:04.236 "is_configured": true, 00:14:04.236 "data_offset": 2048, 00:14:04.236 "data_size": 63488 00:14:04.236 } 00:14:04.236 ] 00:14:04.236 }' 00:14:04.236 11:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.236 11:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.494 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.494 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.494 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.494 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.494 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.494 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.494 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.494 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.494 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.494 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.752 "name": "raid_bdev1", 00:14:04.752 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:04.752 "strip_size_kb": 0, 00:14:04.752 "state": "online", 00:14:04.752 "raid_level": "raid1", 00:14:04.752 "superblock": true, 00:14:04.752 "num_base_bdevs": 2, 00:14:04.752 "num_base_bdevs_discovered": 2, 00:14:04.752 "num_base_bdevs_operational": 2, 00:14:04.752 "base_bdevs_list": [ 00:14:04.752 { 
00:14:04.752 "name": "spare", 00:14:04.752 "uuid": "72b2f4cf-648c-5309-9c09-4555b2d08ac7", 00:14:04.752 "is_configured": true, 00:14:04.752 "data_offset": 2048, 00:14:04.752 "data_size": 63488 00:14:04.752 }, 00:14:04.752 { 00:14:04.752 "name": "BaseBdev2", 00:14:04.752 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:04.752 "is_configured": true, 00:14:04.752 "data_offset": 2048, 00:14:04.752 "data_size": 63488 00:14:04.752 } 00:14:04.752 ] 00:14:04.752 }' 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.752 [2024-11-15 11:25:47.615667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.752 "name": "raid_bdev1", 00:14:04.752 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:04.752 "strip_size_kb": 0, 00:14:04.752 
"state": "online", 00:14:04.752 "raid_level": "raid1", 00:14:04.752 "superblock": true, 00:14:04.752 "num_base_bdevs": 2, 00:14:04.752 "num_base_bdevs_discovered": 1, 00:14:04.752 "num_base_bdevs_operational": 1, 00:14:04.752 "base_bdevs_list": [ 00:14:04.752 { 00:14:04.752 "name": null, 00:14:04.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.752 "is_configured": false, 00:14:04.752 "data_offset": 0, 00:14:04.752 "data_size": 63488 00:14:04.752 }, 00:14:04.752 { 00:14:04.752 "name": "BaseBdev2", 00:14:04.752 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:04.752 "is_configured": true, 00:14:04.752 "data_offset": 2048, 00:14:04.752 "data_size": 63488 00:14:04.752 } 00:14:04.752 ] 00:14:04.752 }' 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.752 11:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.320 11:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:05.320 11:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.320 11:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.320 [2024-11-15 11:25:48.135917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.320 [2024-11-15 11:25:48.136414] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:05.320 [2024-11-15 11:25:48.136451] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:05.320 [2024-11-15 11:25:48.136550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.320 [2024-11-15 11:25:48.152656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:05.320 11:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.320 11:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:05.320 [2024-11-15 11:25:48.155534] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:06.256 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.257 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.257 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.257 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.257 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.257 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.257 11:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.257 11:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.257 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.257 11:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.515 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.515 "name": "raid_bdev1", 00:14:06.515 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:06.515 "strip_size_kb": 0, 00:14:06.515 "state": "online", 00:14:06.515 "raid_level": "raid1", 
00:14:06.515 "superblock": true, 00:14:06.515 "num_base_bdevs": 2, 00:14:06.515 "num_base_bdevs_discovered": 2, 00:14:06.515 "num_base_bdevs_operational": 2, 00:14:06.515 "process": { 00:14:06.515 "type": "rebuild", 00:14:06.515 "target": "spare", 00:14:06.515 "progress": { 00:14:06.515 "blocks": 20480, 00:14:06.516 "percent": 32 00:14:06.516 } 00:14:06.516 }, 00:14:06.516 "base_bdevs_list": [ 00:14:06.516 { 00:14:06.516 "name": "spare", 00:14:06.516 "uuid": "72b2f4cf-648c-5309-9c09-4555b2d08ac7", 00:14:06.516 "is_configured": true, 00:14:06.516 "data_offset": 2048, 00:14:06.516 "data_size": 63488 00:14:06.516 }, 00:14:06.516 { 00:14:06.516 "name": "BaseBdev2", 00:14:06.516 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:06.516 "is_configured": true, 00:14:06.516 "data_offset": 2048, 00:14:06.516 "data_size": 63488 00:14:06.516 } 00:14:06.516 ] 00:14:06.516 }' 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.516 [2024-11-15 11:25:49.328738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.516 [2024-11-15 11:25:49.366250] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:06.516 [2024-11-15 11:25:49.366507] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:06.516 [2024-11-15 11:25:49.366653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.516 [2024-11-15 11:25:49.366709] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.516 "name": "raid_bdev1", 00:14:06.516 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:06.516 "strip_size_kb": 0, 00:14:06.516 "state": "online", 00:14:06.516 "raid_level": "raid1", 00:14:06.516 "superblock": true, 00:14:06.516 "num_base_bdevs": 2, 00:14:06.516 "num_base_bdevs_discovered": 1, 00:14:06.516 "num_base_bdevs_operational": 1, 00:14:06.516 "base_bdevs_list": [ 00:14:06.516 { 00:14:06.516 "name": null, 00:14:06.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.516 "is_configured": false, 00:14:06.516 "data_offset": 0, 00:14:06.516 "data_size": 63488 00:14:06.516 }, 00:14:06.516 { 00:14:06.516 "name": "BaseBdev2", 00:14:06.516 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:06.516 "is_configured": true, 00:14:06.516 "data_offset": 2048, 00:14:06.516 "data_size": 63488 00:14:06.516 } 00:14:06.516 ] 00:14:06.516 }' 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.516 11:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.084 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:07.084 11:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.084 11:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.084 [2024-11-15 11:25:49.923862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:07.084 [2024-11-15 11:25:49.923962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.084 [2024-11-15 11:25:49.923996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:07.084 [2024-11-15 11:25:49.924031] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.084 [2024-11-15 11:25:49.924750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.084 [2024-11-15 11:25:49.924788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:07.084 [2024-11-15 11:25:49.924929] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:07.084 [2024-11-15 11:25:49.924954] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:07.084 [2024-11-15 11:25:49.924969] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:07.084 [2024-11-15 11:25:49.925009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.084 [2024-11-15 11:25:49.940223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:07.084 spare 00:14:07.084 11:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.084 11:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:07.084 [2024-11-15 11:25:49.943461] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.020 11:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.020 11:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.020 11:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.020 11:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.020 11:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.020 11:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:08.020 11:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.020 11:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.020 11:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.280 11:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.280 "name": "raid_bdev1", 00:14:08.280 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:08.280 "strip_size_kb": 0, 00:14:08.280 "state": "online", 00:14:08.280 "raid_level": "raid1", 00:14:08.280 "superblock": true, 00:14:08.280 "num_base_bdevs": 2, 00:14:08.280 "num_base_bdevs_discovered": 2, 00:14:08.280 "num_base_bdevs_operational": 2, 00:14:08.280 "process": { 00:14:08.280 "type": "rebuild", 00:14:08.280 "target": "spare", 00:14:08.280 "progress": { 00:14:08.280 "blocks": 20480, 00:14:08.280 "percent": 32 00:14:08.280 } 00:14:08.280 }, 00:14:08.280 "base_bdevs_list": [ 00:14:08.280 { 00:14:08.280 "name": "spare", 00:14:08.280 "uuid": "72b2f4cf-648c-5309-9c09-4555b2d08ac7", 00:14:08.280 "is_configured": true, 00:14:08.280 "data_offset": 2048, 00:14:08.280 "data_size": 63488 00:14:08.280 }, 00:14:08.280 { 00:14:08.280 "name": "BaseBdev2", 00:14:08.280 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:08.280 "is_configured": true, 00:14:08.280 "data_offset": 2048, 00:14:08.280 "data_size": 63488 00:14:08.280 } 00:14:08.280 ] 00:14:08.280 }' 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.280 
11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.280 [2024-11-15 11:25:51.113197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.280 [2024-11-15 11:25:51.153846] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:08.280 [2024-11-15 11:25:51.153931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.280 [2024-11-15 11:25:51.153958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.280 [2024-11-15 11:25:51.153969] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.280 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.539 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.539 "name": "raid_bdev1", 00:14:08.539 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:08.539 "strip_size_kb": 0, 00:14:08.539 "state": "online", 00:14:08.539 "raid_level": "raid1", 00:14:08.539 "superblock": true, 00:14:08.539 "num_base_bdevs": 2, 00:14:08.539 "num_base_bdevs_discovered": 1, 00:14:08.539 "num_base_bdevs_operational": 1, 00:14:08.539 "base_bdevs_list": [ 00:14:08.539 { 00:14:08.539 "name": null, 00:14:08.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.539 "is_configured": false, 00:14:08.539 "data_offset": 0, 00:14:08.539 "data_size": 63488 00:14:08.539 }, 00:14:08.539 { 00:14:08.539 "name": "BaseBdev2", 00:14:08.539 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:08.539 "is_configured": true, 00:14:08.539 "data_offset": 2048, 00:14:08.539 "data_size": 63488 00:14:08.539 } 00:14:08.539 ] 00:14:08.539 }' 00:14:08.539 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.539 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.798 11:25:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.798 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.798 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.798 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.798 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.798 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.798 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.798 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.798 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.798 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.057 "name": "raid_bdev1", 00:14:09.057 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:09.057 "strip_size_kb": 0, 00:14:09.057 "state": "online", 00:14:09.057 "raid_level": "raid1", 00:14:09.057 "superblock": true, 00:14:09.057 "num_base_bdevs": 2, 00:14:09.057 "num_base_bdevs_discovered": 1, 00:14:09.057 "num_base_bdevs_operational": 1, 00:14:09.057 "base_bdevs_list": [ 00:14:09.057 { 00:14:09.057 "name": null, 00:14:09.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.057 "is_configured": false, 00:14:09.057 "data_offset": 0, 00:14:09.057 "data_size": 63488 00:14:09.057 }, 00:14:09.057 { 00:14:09.057 "name": "BaseBdev2", 00:14:09.057 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:09.057 "is_configured": true, 00:14:09.057 "data_offset": 2048, 00:14:09.057 "data_size": 
63488 00:14:09.057 } 00:14:09.057 ] 00:14:09.057 }' 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.057 [2024-11-15 11:25:51.888766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:09.057 [2024-11-15 11:25:51.888847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.057 [2024-11-15 11:25:51.888890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:09.057 [2024-11-15 11:25:51.888916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.057 [2024-11-15 11:25:51.889575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.057 [2024-11-15 11:25:51.889605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:14:09.057 [2024-11-15 11:25:51.889742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:09.057 [2024-11-15 11:25:51.889764] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:09.057 [2024-11-15 11:25:51.889778] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:09.057 [2024-11-15 11:25:51.889792] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:09.057 BaseBdev1 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.057 11:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.991 11:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.249 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.249 "name": "raid_bdev1", 00:14:10.249 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:10.249 "strip_size_kb": 0, 00:14:10.249 "state": "online", 00:14:10.249 "raid_level": "raid1", 00:14:10.249 "superblock": true, 00:14:10.249 "num_base_bdevs": 2, 00:14:10.249 "num_base_bdevs_discovered": 1, 00:14:10.249 "num_base_bdevs_operational": 1, 00:14:10.249 "base_bdevs_list": [ 00:14:10.249 { 00:14:10.249 "name": null, 00:14:10.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.249 "is_configured": false, 00:14:10.249 "data_offset": 0, 00:14:10.249 "data_size": 63488 00:14:10.249 }, 00:14:10.249 { 00:14:10.249 "name": "BaseBdev2", 00:14:10.249 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:10.249 "is_configured": true, 00:14:10.249 "data_offset": 2048, 00:14:10.249 "data_size": 63488 00:14:10.249 } 00:14:10.249 ] 00:14:10.249 }' 00:14:10.249 11:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.249 11:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.509 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.509 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.509 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:10.509 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.509 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.509 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.509 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.509 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.509 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.509 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.768 "name": "raid_bdev1", 00:14:10.768 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:10.768 "strip_size_kb": 0, 00:14:10.768 "state": "online", 00:14:10.768 "raid_level": "raid1", 00:14:10.768 "superblock": true, 00:14:10.768 "num_base_bdevs": 2, 00:14:10.768 "num_base_bdevs_discovered": 1, 00:14:10.768 "num_base_bdevs_operational": 1, 00:14:10.768 "base_bdevs_list": [ 00:14:10.768 { 00:14:10.768 "name": null, 00:14:10.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.768 "is_configured": false, 00:14:10.768 "data_offset": 0, 00:14:10.768 "data_size": 63488 00:14:10.768 }, 00:14:10.768 { 00:14:10.768 "name": "BaseBdev2", 00:14:10.768 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:10.768 "is_configured": true, 00:14:10.768 "data_offset": 2048, 00:14:10.768 "data_size": 63488 00:14:10.768 } 00:14:10.768 ] 00:14:10.768 }' 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.768 11:25:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.768 [2024-11-15 11:25:53.593521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.768 [2024-11-15 11:25:53.593842] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:10.768 [2024-11-15 11:25:53.593885] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:10.768 request: 00:14:10.768 { 00:14:10.768 "base_bdev": "BaseBdev1", 00:14:10.768 "raid_bdev": "raid_bdev1", 00:14:10.768 "method": 
"bdev_raid_add_base_bdev", 00:14:10.768 "req_id": 1 00:14:10.768 } 00:14:10.768 Got JSON-RPC error response 00:14:10.768 response: 00:14:10.768 { 00:14:10.768 "code": -22, 00:14:10.768 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:10.768 } 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.768 11:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.703 11:25:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.703 11:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.960 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.960 "name": "raid_bdev1", 00:14:11.960 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:11.960 "strip_size_kb": 0, 00:14:11.960 "state": "online", 00:14:11.960 "raid_level": "raid1", 00:14:11.960 "superblock": true, 00:14:11.960 "num_base_bdevs": 2, 00:14:11.960 "num_base_bdevs_discovered": 1, 00:14:11.960 "num_base_bdevs_operational": 1, 00:14:11.960 "base_bdevs_list": [ 00:14:11.960 { 00:14:11.960 "name": null, 00:14:11.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.960 "is_configured": false, 00:14:11.960 "data_offset": 0, 00:14:11.960 "data_size": 63488 00:14:11.960 }, 00:14:11.960 { 00:14:11.960 "name": "BaseBdev2", 00:14:11.960 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:11.960 "is_configured": true, 00:14:11.960 "data_offset": 2048, 00:14:11.960 "data_size": 63488 00:14:11.960 } 00:14:11.960 ] 00:14:11.960 }' 00:14:11.960 11:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.960 11:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.219 11:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.219 11:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.219 11:25:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.219 11:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.219 11:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.219 11:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.219 11:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.219 11:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.219 11:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.219 11:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.478 11:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.478 "name": "raid_bdev1", 00:14:12.478 "uuid": "7459f557-c8e7-4f2f-9820-52cb75a5a2b5", 00:14:12.478 "strip_size_kb": 0, 00:14:12.478 "state": "online", 00:14:12.478 "raid_level": "raid1", 00:14:12.478 "superblock": true, 00:14:12.478 "num_base_bdevs": 2, 00:14:12.478 "num_base_bdevs_discovered": 1, 00:14:12.478 "num_base_bdevs_operational": 1, 00:14:12.478 "base_bdevs_list": [ 00:14:12.478 { 00:14:12.478 "name": null, 00:14:12.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.478 "is_configured": false, 00:14:12.478 "data_offset": 0, 00:14:12.478 "data_size": 63488 00:14:12.478 }, 00:14:12.478 { 00:14:12.479 "name": "BaseBdev2", 00:14:12.479 "uuid": "dd2cdd12-02e3-5da1-adcb-e97979e4f506", 00:14:12.479 "is_configured": true, 00:14:12.479 "data_offset": 2048, 00:14:12.479 "data_size": 63488 00:14:12.479 } 00:14:12.479 ] 00:14:12.479 }' 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75807 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75807 ']' 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 75807 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75807 00:14:12.479 killing process with pid 75807 00:14:12.479 Received shutdown signal, test time was about 60.000000 seconds 00:14:12.479 00:14:12.479 Latency(us) 00:14:12.479 [2024-11-15T11:25:55.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.479 [2024-11-15T11:25:55.429Z] =================================================================================================================== 00:14:12.479 [2024-11-15T11:25:55.429Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75807' 00:14:12.479 11:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 75807 00:14:12.479 [2024-11-15 11:25:55.328271] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.479 11:25:55 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 75807 00:14:12.479 [2024-11-15 11:25:55.328457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.479 [2024-11-15 11:25:55.328563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.479 [2024-11-15 11:25:55.328599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:12.738 [2024-11-15 11:25:55.586829] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.676 11:25:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:13.676 00:14:13.676 real 0m26.743s 00:14:13.676 user 0m33.083s 00:14:13.676 sys 0m4.102s 00:14:13.676 ************************************ 00:14:13.676 END TEST raid_rebuild_test_sb 00:14:13.676 ************************************ 00:14:13.676 11:25:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:13.676 11:25:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.934 11:25:56 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:13.934 11:25:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:13.934 11:25:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:13.934 11:25:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:13.934 ************************************ 00:14:13.934 START TEST raid_rebuild_test_io 00:14:13.934 ************************************ 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:13.934 
11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76570 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76570 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76570 ']' 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:13.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:13.934 11:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.934 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:13.934 Zero copy mechanism will not be used. 00:14:13.934 [2024-11-15 11:25:56.781223] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:14:13.934 [2024-11-15 11:25:56.781422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76570 ] 00:14:14.193 [2024-11-15 11:25:56.968772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.193 [2024-11-15 11:25:57.113220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.452 [2024-11-15 11:25:57.329863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.452 [2024-11-15 11:25:57.329949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.036 BaseBdev1_malloc 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.036 [2024-11-15 11:25:57.817643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:15.036 [2024-11-15 11:25:57.817756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.036 [2024-11-15 11:25:57.817790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:15.036 [2024-11-15 11:25:57.817811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.036 [2024-11-15 11:25:57.820918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.036 [2024-11-15 11:25:57.820986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:15.036 BaseBdev1 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.036 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:15.037 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.037 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.037 BaseBdev2_malloc 00:14:15.037 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.037 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:15.037 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.037 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.037 [2024-11-15 11:25:57.874006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:15.037 [2024-11-15 11:25:57.874127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.037 [2024-11-15 11:25:57.874164] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:15.037 [2024-11-15 11:25:57.874201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.037 [2024-11-15 11:25:57.877142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.037 [2024-11-15 11:25:57.877249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:15.037 BaseBdev2 00:14:15.037 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.039 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:15.039 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.039 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.039 spare_malloc 00:14:15.039 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.039 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:15.039 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.039 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.039 spare_delay 00:14:15.039 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.039 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:15.039 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.039 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.039 [2024-11-15 11:25:57.950616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:14:15.039 [2024-11-15 11:25:57.950882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.039 [2024-11-15 11:25:57.950924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:15.039 [2024-11-15 11:25:57.950944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.039 [2024-11-15 11:25:57.954041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.039 [2024-11-15 11:25:57.954274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:15.039 spare 00:14:15.040 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.040 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:15.040 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.040 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.040 [2024-11-15 11:25:57.962671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.040 [2024-11-15 11:25:57.965341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.040 [2024-11-15 11:25:57.965460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:15.040 [2024-11-15 11:25:57.965482] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:15.040 [2024-11-15 11:25:57.965780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:15.040 [2024-11-15 11:25:57.965979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:15.040 [2024-11-15 11:25:57.965997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:14:15.040 [2024-11-15 11:25:57.966243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.040 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.040 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.041 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.309 11:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.309 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.309 
"name": "raid_bdev1", 00:14:15.309 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:15.309 "strip_size_kb": 0, 00:14:15.309 "state": "online", 00:14:15.309 "raid_level": "raid1", 00:14:15.309 "superblock": false, 00:14:15.309 "num_base_bdevs": 2, 00:14:15.309 "num_base_bdevs_discovered": 2, 00:14:15.309 "num_base_bdevs_operational": 2, 00:14:15.309 "base_bdevs_list": [ 00:14:15.309 { 00:14:15.309 "name": "BaseBdev1", 00:14:15.309 "uuid": "8abd44f5-63bd-5801-988d-4bd337a15f9f", 00:14:15.309 "is_configured": true, 00:14:15.309 "data_offset": 0, 00:14:15.309 "data_size": 65536 00:14:15.309 }, 00:14:15.309 { 00:14:15.309 "name": "BaseBdev2", 00:14:15.309 "uuid": "b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:15.309 "is_configured": true, 00:14:15.309 "data_offset": 0, 00:14:15.309 "data_size": 65536 00:14:15.309 } 00:14:15.309 ] 00:14:15.309 }' 00:14:15.309 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.309 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.567 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:15.567 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.567 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.567 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:15.567 [2024-11-15 11:25:58.463235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.567 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.567 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:15.567 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.567 11:25:58 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.567 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.567 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.827 [2024-11-15 11:25:58.566832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:15.827 11:25:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.827 "name": "raid_bdev1", 00:14:15.827 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:15.827 "strip_size_kb": 0, 00:14:15.827 "state": "online", 00:14:15.827 "raid_level": "raid1", 00:14:15.827 "superblock": false, 00:14:15.827 "num_base_bdevs": 2, 00:14:15.827 "num_base_bdevs_discovered": 1, 00:14:15.827 "num_base_bdevs_operational": 1, 00:14:15.827 "base_bdevs_list": [ 00:14:15.827 { 00:14:15.827 "name": null, 00:14:15.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.827 "is_configured": false, 00:14:15.827 "data_offset": 0, 00:14:15.827 "data_size": 65536 00:14:15.827 }, 00:14:15.827 { 00:14:15.827 "name": "BaseBdev2", 00:14:15.827 "uuid": "b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:15.827 "is_configured": true, 00:14:15.827 "data_offset": 0, 00:14:15.827 "data_size": 65536 00:14:15.827 } 00:14:15.827 ] 00:14:15.827 }' 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:15.827 11:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.827 [2024-11-15 11:25:58.711435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:15.827 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:15.827 Zero copy mechanism will not be used. 00:14:15.827 Running I/O for 60 seconds... 00:14:16.420 11:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:16.420 11:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.420 11:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.420 [2024-11-15 11:25:59.104361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.420 11:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.420 11:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:16.420 [2024-11-15 11:25:59.168367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:16.420 [2024-11-15 11:25:59.171153] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.420 [2024-11-15 11:25:59.306321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:16.680 [2024-11-15 11:25:59.418236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:16.680 [2024-11-15 11:25:59.418757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:16.941 183.00 IOPS, 549.00 MiB/s [2024-11-15T11:25:59.891Z] [2024-11-15 11:25:59.762945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:14:16.941 [2024-11-15 11:25:59.763922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:17.200 [2024-11-15 11:25:59.988963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:17.200 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.200 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.460 [2024-11-15 11:26:00.202178] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:17.460 [2024-11-15 11:26:00.202898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.460 "name": "raid_bdev1", 00:14:17.460 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:17.460 
"strip_size_kb": 0, 00:14:17.460 "state": "online", 00:14:17.460 "raid_level": "raid1", 00:14:17.460 "superblock": false, 00:14:17.460 "num_base_bdevs": 2, 00:14:17.460 "num_base_bdevs_discovered": 2, 00:14:17.460 "num_base_bdevs_operational": 2, 00:14:17.460 "process": { 00:14:17.460 "type": "rebuild", 00:14:17.460 "target": "spare", 00:14:17.460 "progress": { 00:14:17.460 "blocks": 12288, 00:14:17.460 "percent": 18 00:14:17.460 } 00:14:17.460 }, 00:14:17.460 "base_bdevs_list": [ 00:14:17.460 { 00:14:17.460 "name": "spare", 00:14:17.460 "uuid": "b90ae44e-6d40-5506-bb04-89919b27eb02", 00:14:17.460 "is_configured": true, 00:14:17.460 "data_offset": 0, 00:14:17.460 "data_size": 65536 00:14:17.460 }, 00:14:17.460 { 00:14:17.460 "name": "BaseBdev2", 00:14:17.460 "uuid": "b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:17.460 "is_configured": true, 00:14:17.460 "data_offset": 0, 00:14:17.460 "data_size": 65536 00:14:17.460 } 00:14:17.460 ] 00:14:17.460 }' 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.460 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.461 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:17.461 11:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.461 11:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.461 [2024-11-15 11:26:00.311329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:17.461 [2024-11-15 11:26:00.319222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 
offset_end: 18432 00:14:17.461 [2024-11-15 11:26:00.319549] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:17.720 [2024-11-15 11:26:00.427660] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:17.720 [2024-11-15 11:26:00.443213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.720 [2024-11-15 11:26:00.443379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:17.720 [2024-11-15 11:26:00.443436] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:17.720 [2024-11-15 11:26:00.494460] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.720 11:26:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.720 "name": "raid_bdev1", 00:14:17.720 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:17.720 "strip_size_kb": 0, 00:14:17.720 "state": "online", 00:14:17.720 "raid_level": "raid1", 00:14:17.720 "superblock": false, 00:14:17.720 "num_base_bdevs": 2, 00:14:17.720 "num_base_bdevs_discovered": 1, 00:14:17.720 "num_base_bdevs_operational": 1, 00:14:17.720 "base_bdevs_list": [ 00:14:17.720 { 00:14:17.720 "name": null, 00:14:17.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.720 "is_configured": false, 00:14:17.720 "data_offset": 0, 00:14:17.720 "data_size": 65536 00:14:17.720 }, 00:14:17.720 { 00:14:17.720 "name": "BaseBdev2", 00:14:17.720 "uuid": "b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:17.720 "is_configured": true, 00:14:17.720 "data_offset": 0, 00:14:17.720 "data_size": 65536 00:14:17.720 } 00:14:17.720 ] 00:14:17.720 }' 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.720 11:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.239 144.00 IOPS, 432.00 MiB/s [2024-11-15T11:26:01.189Z] 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.239 11:26:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.239 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.239 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.239 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.239 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.239 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.239 11:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.239 11:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.239 11:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.239 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.239 "name": "raid_bdev1", 00:14:18.239 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:18.239 "strip_size_kb": 0, 00:14:18.239 "state": "online", 00:14:18.239 "raid_level": "raid1", 00:14:18.239 "superblock": false, 00:14:18.239 "num_base_bdevs": 2, 00:14:18.239 "num_base_bdevs_discovered": 1, 00:14:18.239 "num_base_bdevs_operational": 1, 00:14:18.239 "base_bdevs_list": [ 00:14:18.239 { 00:14:18.239 "name": null, 00:14:18.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.239 "is_configured": false, 00:14:18.239 "data_offset": 0, 00:14:18.239 "data_size": 65536 00:14:18.239 }, 00:14:18.239 { 00:14:18.239 "name": "BaseBdev2", 00:14:18.239 "uuid": "b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:18.239 "is_configured": true, 00:14:18.239 "data_offset": 0, 00:14:18.239 "data_size": 65536 00:14:18.239 } 00:14:18.239 ] 00:14:18.239 }' 00:14:18.239 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:14:18.239 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.239 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.499 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.499 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:18.499 11:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.499 11:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.499 [2024-11-15 11:26:01.193278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.499 11:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.499 11:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:18.499 [2024-11-15 11:26:01.251935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:18.499 [2024-11-15 11:26:01.254604] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:18.499 [2024-11-15 11:26:01.372160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:18.499 [2024-11-15 11:26:01.372876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:18.759 [2024-11-15 11:26:01.589881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:18.759 [2024-11-15 11:26:01.590449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:19.018 157.00 IOPS, 471.00 MiB/s [2024-11-15T11:26:01.968Z] [2024-11-15 
11:26:01.931628] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:19.018 [2024-11-15 11:26:01.932082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:19.277 [2024-11-15 11:26:02.159958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:19.277 [2024-11-15 11:26:02.160363] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.536 "name": "raid_bdev1", 00:14:19.536 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:19.536 "strip_size_kb": 0, 00:14:19.536 "state": "online", 
00:14:19.536 "raid_level": "raid1", 00:14:19.536 "superblock": false, 00:14:19.536 "num_base_bdevs": 2, 00:14:19.536 "num_base_bdevs_discovered": 2, 00:14:19.536 "num_base_bdevs_operational": 2, 00:14:19.536 "process": { 00:14:19.536 "type": "rebuild", 00:14:19.536 "target": "spare", 00:14:19.536 "progress": { 00:14:19.536 "blocks": 10240, 00:14:19.536 "percent": 15 00:14:19.536 } 00:14:19.536 }, 00:14:19.536 "base_bdevs_list": [ 00:14:19.536 { 00:14:19.536 "name": "spare", 00:14:19.536 "uuid": "b90ae44e-6d40-5506-bb04-89919b27eb02", 00:14:19.536 "is_configured": true, 00:14:19.536 "data_offset": 0, 00:14:19.536 "data_size": 65536 00:14:19.536 }, 00:14:19.536 { 00:14:19.536 "name": "BaseBdev2", 00:14:19.536 "uuid": "b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:19.536 "is_configured": true, 00:14:19.536 "data_offset": 0, 00:14:19.536 "data_size": 65536 00:14:19.536 } 00:14:19.536 ] 00:14:19.536 }' 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=439 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.536 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.536 "name": "raid_bdev1", 00:14:19.536 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:19.536 "strip_size_kb": 0, 00:14:19.536 "state": "online", 00:14:19.536 "raid_level": "raid1", 00:14:19.536 "superblock": false, 00:14:19.536 "num_base_bdevs": 2, 00:14:19.536 "num_base_bdevs_discovered": 2, 00:14:19.536 "num_base_bdevs_operational": 2, 00:14:19.536 "process": { 00:14:19.536 "type": "rebuild", 00:14:19.536 "target": "spare", 00:14:19.536 "progress": { 00:14:19.537 "blocks": 12288, 00:14:19.537 "percent": 18 00:14:19.537 } 00:14:19.537 }, 00:14:19.537 "base_bdevs_list": [ 00:14:19.537 { 00:14:19.537 "name": "spare", 00:14:19.537 "uuid": "b90ae44e-6d40-5506-bb04-89919b27eb02", 00:14:19.537 "is_configured": true, 00:14:19.537 "data_offset": 0, 00:14:19.537 
"data_size": 65536 00:14:19.537 }, 00:14:19.537 { 00:14:19.537 "name": "BaseBdev2", 00:14:19.537 "uuid": "b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:19.537 "is_configured": true, 00:14:19.537 "data_offset": 0, 00:14:19.537 "data_size": 65536 00:14:19.537 } 00:14:19.537 ] 00:14:19.537 }' 00:14:19.537 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.537 [2024-11-15 11:26:02.483163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:19.797 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.797 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.797 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.797 11:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.797 [2024-11-15 11:26:02.699645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:20.366 140.75 IOPS, 422.25 MiB/s [2024-11-15T11:26:03.316Z] [2024-11-15 11:26:03.020708] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:20.366 [2024-11-15 11:26:03.151159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:20.624 [2024-11-15 11:26:03.376275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:20.624 [2024-11-15 11:26:03.376772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:20.624 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.624 
11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.624 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.624 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.624 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.624 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.624 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.624 11:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.624 11:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.624 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.883 11:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.883 [2024-11-15 11:26:03.585266] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:20.883 [2024-11-15 11:26:03.585749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:20.883 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.883 "name": "raid_bdev1", 00:14:20.883 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:20.883 "strip_size_kb": 0, 00:14:20.883 "state": "online", 00:14:20.883 "raid_level": "raid1", 00:14:20.883 "superblock": false, 00:14:20.883 "num_base_bdevs": 2, 00:14:20.883 "num_base_bdevs_discovered": 2, 00:14:20.883 "num_base_bdevs_operational": 2, 00:14:20.883 "process": { 00:14:20.883 "type": "rebuild", 00:14:20.883 "target": "spare", 00:14:20.883 "progress": 
{ 00:14:20.883 "blocks": 26624, 00:14:20.883 "percent": 40 00:14:20.883 } 00:14:20.883 }, 00:14:20.883 "base_bdevs_list": [ 00:14:20.883 { 00:14:20.883 "name": "spare", 00:14:20.883 "uuid": "b90ae44e-6d40-5506-bb04-89919b27eb02", 00:14:20.883 "is_configured": true, 00:14:20.883 "data_offset": 0, 00:14:20.883 "data_size": 65536 00:14:20.883 }, 00:14:20.883 { 00:14:20.883 "name": "BaseBdev2", 00:14:20.883 "uuid": "b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:20.883 "is_configured": true, 00:14:20.883 "data_offset": 0, 00:14:20.883 "data_size": 65536 00:14:20.883 } 00:14:20.883 ] 00:14:20.883 }' 00:14:20.883 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.883 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.883 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.883 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.883 11:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.142 124.00 IOPS, 372.00 MiB/s [2024-11-15T11:26:04.092Z] [2024-11-15 11:26:04.034116] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:21.795 [2024-11-15 11:26:04.495063] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:21.795 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.795 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.795 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.795 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:21.795 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.795 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.795 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.795 11:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.795 11:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.795 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.795 110.67 IOPS, 332.00 MiB/s [2024-11-15T11:26:04.745Z] 11:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.054 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.054 "name": "raid_bdev1", 00:14:22.054 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:22.054 "strip_size_kb": 0, 00:14:22.054 "state": "online", 00:14:22.054 "raid_level": "raid1", 00:14:22.054 "superblock": false, 00:14:22.054 "num_base_bdevs": 2, 00:14:22.054 "num_base_bdevs_discovered": 2, 00:14:22.054 "num_base_bdevs_operational": 2, 00:14:22.054 "process": { 00:14:22.054 "type": "rebuild", 00:14:22.054 "target": "spare", 00:14:22.054 "progress": { 00:14:22.054 "blocks": 43008, 00:14:22.054 "percent": 65 00:14:22.054 } 00:14:22.054 }, 00:14:22.054 "base_bdevs_list": [ 00:14:22.054 { 00:14:22.054 "name": "spare", 00:14:22.054 "uuid": "b90ae44e-6d40-5506-bb04-89919b27eb02", 00:14:22.054 "is_configured": true, 00:14:22.054 "data_offset": 0, 00:14:22.054 "data_size": 65536 00:14:22.054 }, 00:14:22.054 { 00:14:22.054 "name": "BaseBdev2", 00:14:22.054 "uuid": "b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:22.054 "is_configured": true, 00:14:22.054 "data_offset": 0, 00:14:22.054 "data_size": 65536 00:14:22.054 } 00:14:22.054 ] 00:14:22.054 }' 00:14:22.054 
11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.054 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.054 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.054 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.054 11:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.313 [2024-11-15 11:26:05.086008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:22.573 [2024-11-15 11:26:05.518144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:23.091 99.86 IOPS, 299.57 MiB/s [2024-11-15T11:26:06.041Z] 11:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.091 11:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.091 11:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.091 11:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.091 11:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.091 11:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.091 11:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.091 11:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.091 11:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.091 11:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:23.091 11:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.091 11:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.091 "name": "raid_bdev1", 00:14:23.091 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:23.091 "strip_size_kb": 0, 00:14:23.091 "state": "online", 00:14:23.091 "raid_level": "raid1", 00:14:23.091 "superblock": false, 00:14:23.091 "num_base_bdevs": 2, 00:14:23.091 "num_base_bdevs_discovered": 2, 00:14:23.091 "num_base_bdevs_operational": 2, 00:14:23.091 "process": { 00:14:23.091 "type": "rebuild", 00:14:23.091 "target": "spare", 00:14:23.091 "progress": { 00:14:23.091 "blocks": 63488, 00:14:23.091 "percent": 96 00:14:23.091 } 00:14:23.091 }, 00:14:23.091 "base_bdevs_list": [ 00:14:23.091 { 00:14:23.091 "name": "spare", 00:14:23.091 "uuid": "b90ae44e-6d40-5506-bb04-89919b27eb02", 00:14:23.091 "is_configured": true, 00:14:23.091 "data_offset": 0, 00:14:23.091 "data_size": 65536 00:14:23.091 }, 00:14:23.091 { 00:14:23.091 "name": "BaseBdev2", 00:14:23.092 "uuid": "b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:23.092 "is_configured": true, 00:14:23.092 "data_offset": 0, 00:14:23.092 "data_size": 65536 00:14:23.092 } 00:14:23.092 ] 00:14:23.092 }' 00:14:23.092 11:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.092 [2024-11-15 11:26:05.964739] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:23.092 11:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.092 11:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.092 11:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.092 11:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.351 
[2024-11-15 11:26:06.064724] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:23.351 [2024-11-15 11:26:06.074553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.179 92.38 IOPS, 277.12 MiB/s [2024-11-15T11:26:07.129Z] 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.179 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.179 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.179 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.179 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.179 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.179 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.179 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.179 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.179 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.179 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.179 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.179 "name": "raid_bdev1", 00:14:24.179 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:24.179 "strip_size_kb": 0, 00:14:24.179 "state": "online", 00:14:24.179 "raid_level": "raid1", 00:14:24.179 "superblock": false, 00:14:24.179 "num_base_bdevs": 2, 00:14:24.179 "num_base_bdevs_discovered": 2, 00:14:24.179 "num_base_bdevs_operational": 2, 00:14:24.179 "base_bdevs_list": 
[ 00:14:24.179 { 00:14:24.179 "name": "spare", 00:14:24.179 "uuid": "b90ae44e-6d40-5506-bb04-89919b27eb02", 00:14:24.179 "is_configured": true, 00:14:24.179 "data_offset": 0, 00:14:24.179 "data_size": 65536 00:14:24.179 }, 00:14:24.179 { 00:14:24.179 "name": "BaseBdev2", 00:14:24.179 "uuid": "b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:24.179 "is_configured": true, 00:14:24.179 "data_offset": 0, 00:14:24.179 "data_size": 65536 00:14:24.179 } 00:14:24.179 ] 00:14:24.179 }' 00:14:24.179 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.439 "name": "raid_bdev1", 00:14:24.439 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:24.439 "strip_size_kb": 0, 00:14:24.439 "state": "online", 00:14:24.439 "raid_level": "raid1", 00:14:24.439 "superblock": false, 00:14:24.439 "num_base_bdevs": 2, 00:14:24.439 "num_base_bdevs_discovered": 2, 00:14:24.439 "num_base_bdevs_operational": 2, 00:14:24.439 "base_bdevs_list": [ 00:14:24.439 { 00:14:24.439 "name": "spare", 00:14:24.439 "uuid": "b90ae44e-6d40-5506-bb04-89919b27eb02", 00:14:24.439 "is_configured": true, 00:14:24.439 "data_offset": 0, 00:14:24.439 "data_size": 65536 00:14:24.439 }, 00:14:24.439 { 00:14:24.439 "name": "BaseBdev2", 00:14:24.439 "uuid": "b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:24.439 "is_configured": true, 00:14:24.439 "data_offset": 0, 00:14:24.439 "data_size": 65536 00:14:24.439 } 00:14:24.439 ] 00:14:24.439 }' 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.439 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.699 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.699 "name": "raid_bdev1", 00:14:24.699 "uuid": "2550c1ad-f8da-40f7-b2d9-1331a2b6ba20", 00:14:24.699 "strip_size_kb": 0, 00:14:24.699 "state": "online", 00:14:24.699 "raid_level": "raid1", 00:14:24.699 "superblock": false, 00:14:24.699 "num_base_bdevs": 2, 00:14:24.699 "num_base_bdevs_discovered": 2, 00:14:24.699 "num_base_bdevs_operational": 2, 00:14:24.699 "base_bdevs_list": [ 00:14:24.699 { 00:14:24.699 "name": "spare", 00:14:24.699 "uuid": "b90ae44e-6d40-5506-bb04-89919b27eb02", 00:14:24.699 "is_configured": true, 00:14:24.699 "data_offset": 0, 00:14:24.699 "data_size": 65536 00:14:24.699 }, 00:14:24.699 { 00:14:24.699 "name": "BaseBdev2", 00:14:24.699 "uuid": 
"b42a26de-dbd6-5efe-a18e-acbdb9b40a26", 00:14:24.699 "is_configured": true, 00:14:24.699 "data_offset": 0, 00:14:24.699 "data_size": 65536 00:14:24.699 } 00:14:24.699 ] 00:14:24.699 }' 00:14:24.699 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.699 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.958 86.67 IOPS, 260.00 MiB/s [2024-11-15T11:26:07.908Z] 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.958 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.958 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.958 [2024-11-15 11:26:07.875376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.958 [2024-11-15 11:26:07.875416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.216 00:14:25.216 Latency(us) 00:14:25.216 [2024-11-15T11:26:08.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.216 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:25.216 raid_bdev1 : 9.23 85.06 255.19 0.00 0.00 15903.77 281.13 116773.24 00:14:25.216 [2024-11-15T11:26:08.166Z] =================================================================================================================== 00:14:25.216 [2024-11-15T11:26:08.166Z] Total : 85.06 255.19 0.00 0.00 15903.77 281.13 116773.24 00:14:25.216 [2024-11-15 11:26:07.959449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.216 [2024-11-15 11:26:07.959535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.216 [2024-11-15 11:26:07.959633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.216 
[2024-11-15 11:26:07.959649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:25.216 { 00:14:25.216 "results": [ 00:14:25.216 { 00:14:25.216 "job": "raid_bdev1", 00:14:25.216 "core_mask": "0x1", 00:14:25.216 "workload": "randrw", 00:14:25.216 "percentage": 50, 00:14:25.216 "status": "finished", 00:14:25.216 "queue_depth": 2, 00:14:25.216 "io_size": 3145728, 00:14:25.216 "runtime": 9.228588, 00:14:25.216 "iops": 85.06176676215256, 00:14:25.216 "mibps": 255.18530028645768, 00:14:25.216 "io_failed": 0, 00:14:25.216 "io_timeout": 0, 00:14:25.216 "avg_latency_us": 15903.7706821077, 00:14:25.216 "min_latency_us": 281.13454545454545, 00:14:25.216 "max_latency_us": 116773.23636363636 00:14:25.216 } 00:14:25.216 ], 00:14:25.216 "core_count": 1 00:14:25.216 } 00:14:25.216 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.216 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.217 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.217 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.217 11:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:25.217 11:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.217 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:25.475 /dev/nbd0 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.476 1+0 records in 00:14:25.476 1+0 records out 00:14:25.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401164 s, 10.2 MB/s 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@12 -- # local i 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.476 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:25.740 /dev/nbd1 00:14:25.740 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:25.740 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:25.740 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:25.740 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:14:25.740 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:25.740 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:25.740 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:25.740 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:25.740 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:25.740 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:25.740 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.740 1+0 records in 00:14:25.741 1+0 records out 00:14:25.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036013 s, 11.4 MB/s 00:14:25.741 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.741 11:26:08 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@888 -- # size=4096 00:14:25.741 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.741 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:25.741 11:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:25.741 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.741 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.741 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:25.999 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:25.999 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.999 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:25.999 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:25.999 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:25.999 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.999 11:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.258 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.518 11:26:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76570 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76570 ']' 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76570 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76570 00:14:26.518 killing process with pid 76570 00:14:26.518 Received shutdown signal, test time was about 10.724137 seconds 00:14:26.518 00:14:26.518 Latency(us) 00:14:26.518 [2024-11-15T11:26:09.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.518 [2024-11-15T11:26:09.468Z] =================================================================================================================== 00:14:26.518 [2024-11-15T11:26:09.468Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76570' 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76570 00:14:26.518 [2024-11-15 11:26:09.438843] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:26.518 11:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76570 00:14:26.776 [2024-11-15 11:26:09.621244] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:28.152 00:14:28.152 real 0m14.090s 00:14:28.152 user 0m18.077s 00:14:28.152 sys 0m1.557s 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:28.152 ************************************ 00:14:28.152 END TEST raid_rebuild_test_io 00:14:28.152 ************************************ 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.152 11:26:10 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:28.152 11:26:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:28.152 11:26:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:28.152 11:26:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.152 ************************************ 00:14:28.152 START TEST raid_rebuild_test_sb_io 00:14:28.152 ************************************ 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.152 11:26:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76972 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76972 00:14:28.152 
11:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 76972 ']' 00:14:28.152 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.153 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:28.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.153 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.153 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:28.153 11:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.153 [2024-11-15 11:26:10.908136] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:14:28.153 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:28.153 Zero copy mechanism will not be used. 
00:14:28.153 [2024-11-15 11:26:10.908340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76972 ] 00:14:28.153 [2024-11-15 11:26:11.083812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.410 [2024-11-15 11:26:11.226811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.669 [2024-11-15 11:26:11.444675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.669 [2024-11-15 11:26:11.444789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.927 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:28.927 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:14:28.927 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.927 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:28.927 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.927 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.186 BaseBdev1_malloc 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.186 [2024-11-15 11:26:11.914516] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:29.186 [2024-11-15 11:26:11.914626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.186 [2024-11-15 11:26:11.914675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:29.186 [2024-11-15 11:26:11.914695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.186 [2024-11-15 11:26:11.917745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.186 [2024-11-15 11:26:11.917822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.186 BaseBdev1 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.186 BaseBdev2_malloc 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.186 [2024-11-15 11:26:11.969931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:29.186 [2024-11-15 11:26:11.970026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:29.186 [2024-11-15 11:26:11.970089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:29.186 [2024-11-15 11:26:11.970113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.186 [2024-11-15 11:26:11.973040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.186 [2024-11-15 11:26:11.973103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:29.186 BaseBdev2 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.186 11:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.186 spare_malloc 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.186 spare_delay 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.186 
[2024-11-15 11:26:12.042239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:29.186 [2024-11-15 11:26:12.042327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.186 [2024-11-15 11:26:12.042361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:29.186 [2024-11-15 11:26:12.042381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.186 [2024-11-15 11:26:12.045738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.186 [2024-11-15 11:26:12.045817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:29.186 spare 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.186 [2024-11-15 11:26:12.050580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.186 [2024-11-15 11:26:12.053167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.186 [2024-11-15 11:26:12.053442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:29.186 [2024-11-15 11:26:12.053480] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:29.186 [2024-11-15 11:26:12.053815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:29.186 [2024-11-15 11:26:12.054102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:29.186 [2024-11-15 
11:26:12.054122] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:29.186 [2024-11-15 11:26:12.054344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.186 "name": "raid_bdev1", 00:14:29.186 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:29.186 "strip_size_kb": 0, 00:14:29.186 "state": "online", 00:14:29.186 "raid_level": "raid1", 00:14:29.186 "superblock": true, 00:14:29.186 "num_base_bdevs": 2, 00:14:29.186 "num_base_bdevs_discovered": 2, 00:14:29.186 "num_base_bdevs_operational": 2, 00:14:29.186 "base_bdevs_list": [ 00:14:29.186 { 00:14:29.186 "name": "BaseBdev1", 00:14:29.186 "uuid": "1adfb9c3-4b9e-5cd6-85b3-5fcc9b69af2a", 00:14:29.186 "is_configured": true, 00:14:29.186 "data_offset": 2048, 00:14:29.186 "data_size": 63488 00:14:29.186 }, 00:14:29.186 { 00:14:29.186 "name": "BaseBdev2", 00:14:29.186 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:29.186 "is_configured": true, 00:14:29.186 "data_offset": 2048, 00:14:29.186 "data_size": 63488 00:14:29.186 } 00:14:29.186 ] 00:14:29.186 }' 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.186 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.753 [2024-11-15 11:26:12.559321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:29.753 [2024-11-15 11:26:12.666875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.753 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.011 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.011 "name": "raid_bdev1", 00:14:30.011 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:30.011 "strip_size_kb": 0, 00:14:30.011 "state": "online", 00:14:30.011 "raid_level": "raid1", 00:14:30.011 "superblock": true, 00:14:30.011 "num_base_bdevs": 2, 00:14:30.011 "num_base_bdevs_discovered": 1, 00:14:30.011 "num_base_bdevs_operational": 1, 00:14:30.011 "base_bdevs_list": [ 00:14:30.011 { 00:14:30.011 "name": null, 00:14:30.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.011 "is_configured": false, 00:14:30.011 "data_offset": 0, 00:14:30.011 "data_size": 63488 00:14:30.011 }, 00:14:30.011 { 00:14:30.011 "name": "BaseBdev2", 00:14:30.011 "uuid": 
"8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:30.011 "is_configured": true, 00:14:30.011 "data_offset": 2048, 00:14:30.011 "data_size": 63488 00:14:30.011 } 00:14:30.011 ] 00:14:30.011 }' 00:14:30.011 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.011 11:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.011 [2024-11-15 11:26:12.779641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:30.011 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:30.011 Zero copy mechanism will not be used. 00:14:30.011 Running I/O for 60 seconds... 00:14:30.270 11:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:30.270 11:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.270 11:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.270 [2024-11-15 11:26:13.199230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.527 11:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.527 11:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:30.527 [2024-11-15 11:26:13.257473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:30.527 [2024-11-15 11:26:13.260074] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.528 [2024-11-15 11:26:13.363243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:30.528 [2024-11-15 11:26:13.363799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:30.786 [2024-11-15 11:26:13.590564] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:30.786 [2024-11-15 11:26:13.591107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:31.045 203.00 IOPS, 609.00 MiB/s [2024-11-15T11:26:13.995Z] [2024-11-15 11:26:13.937031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:31.313 [2024-11-15 11:26:14.088129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:31.313 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.313 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.313 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.313 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.313 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.313 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.313 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.313 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.313 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.585 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.585 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.585 "name": "raid_bdev1", 00:14:31.585 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:31.585 
"strip_size_kb": 0, 00:14:31.585 "state": "online", 00:14:31.585 "raid_level": "raid1", 00:14:31.585 "superblock": true, 00:14:31.585 "num_base_bdevs": 2, 00:14:31.585 "num_base_bdevs_discovered": 2, 00:14:31.585 "num_base_bdevs_operational": 2, 00:14:31.585 "process": { 00:14:31.585 "type": "rebuild", 00:14:31.585 "target": "spare", 00:14:31.585 "progress": { 00:14:31.585 "blocks": 12288, 00:14:31.585 "percent": 19 00:14:31.585 } 00:14:31.585 }, 00:14:31.585 "base_bdevs_list": [ 00:14:31.585 { 00:14:31.585 "name": "spare", 00:14:31.585 "uuid": "64c44fd8-1187-591d-86d9-97aaf90b5bf1", 00:14:31.585 "is_configured": true, 00:14:31.585 "data_offset": 2048, 00:14:31.585 "data_size": 63488 00:14:31.585 }, 00:14:31.585 { 00:14:31.585 "name": "BaseBdev2", 00:14:31.585 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:31.585 "is_configured": true, 00:14:31.585 "data_offset": 2048, 00:14:31.585 "data_size": 63488 00:14:31.585 } 00:14:31.585 ] 00:14:31.585 }' 00:14:31.585 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.585 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.586 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.586 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.586 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:31.586 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.586 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.586 [2024-11-15 11:26:14.416389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.586 [2024-11-15 11:26:14.460165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:14:31.844 [2024-11-15 11:26:14.569570] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:31.844 [2024-11-15 11:26:14.580398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.844 [2024-11-15 11:26:14.580475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.844 [2024-11-15 11:26:14.580494] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:31.844 [2024-11-15 11:26:14.616372] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.844 "name": "raid_bdev1", 00:14:31.844 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:31.844 "strip_size_kb": 0, 00:14:31.844 "state": "online", 00:14:31.844 "raid_level": "raid1", 00:14:31.844 "superblock": true, 00:14:31.844 "num_base_bdevs": 2, 00:14:31.844 "num_base_bdevs_discovered": 1, 00:14:31.844 "num_base_bdevs_operational": 1, 00:14:31.844 "base_bdevs_list": [ 00:14:31.844 { 00:14:31.844 "name": null, 00:14:31.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.844 "is_configured": false, 00:14:31.844 "data_offset": 0, 00:14:31.844 "data_size": 63488 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "name": "BaseBdev2", 00:14:31.844 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:31.844 "is_configured": true, 00:14:31.844 "data_offset": 2048, 00:14:31.844 "data_size": 63488 00:14:31.844 } 00:14:31.844 ] 00:14:31.844 }' 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.844 11:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.360 166.50 IOPS, 499.50 MiB/s [2024-11-15T11:26:15.310Z] 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.360 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.360 
11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.360 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.360 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.360 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.360 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.360 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.360 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.360 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.360 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.360 "name": "raid_bdev1", 00:14:32.360 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:32.360 "strip_size_kb": 0, 00:14:32.360 "state": "online", 00:14:32.360 "raid_level": "raid1", 00:14:32.360 "superblock": true, 00:14:32.360 "num_base_bdevs": 2, 00:14:32.360 "num_base_bdevs_discovered": 1, 00:14:32.360 "num_base_bdevs_operational": 1, 00:14:32.360 "base_bdevs_list": [ 00:14:32.360 { 00:14:32.360 "name": null, 00:14:32.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.360 "is_configured": false, 00:14:32.360 "data_offset": 0, 00:14:32.360 "data_size": 63488 00:14:32.360 }, 00:14:32.360 { 00:14:32.360 "name": "BaseBdev2", 00:14:32.360 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:32.360 "is_configured": true, 00:14:32.360 "data_offset": 2048, 00:14:32.360 "data_size": 63488 00:14:32.360 } 00:14:32.360 ] 00:14:32.360 }' 00:14:32.360 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.360 11:26:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.360 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.619 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.619 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.619 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.619 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.619 [2024-11-15 11:26:15.336799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.619 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.619 11:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:32.619 [2024-11-15 11:26:15.398146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:32.619 [2024-11-15 11:26:15.400990] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:32.619 [2024-11-15 11:26:15.518556] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:32.619 [2024-11-15 11:26:15.519199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:32.879 [2024-11-15 11:26:15.623811] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:32.879 [2024-11-15 11:26:15.624071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:33.138 168.67 IOPS, 506.00 MiB/s [2024-11-15T11:26:16.088Z] [2024-11-15 11:26:15.943890] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:33.398 [2024-11-15 11:26:16.170359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:33.398 [2024-11-15 11:26:16.170977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.656 "name": "raid_bdev1", 00:14:33.656 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:33.656 "strip_size_kb": 0, 00:14:33.656 "state": "online", 00:14:33.656 "raid_level": "raid1", 00:14:33.656 "superblock": true, 00:14:33.656 "num_base_bdevs": 2, 00:14:33.656 "num_base_bdevs_discovered": 2, 
00:14:33.656 "num_base_bdevs_operational": 2, 00:14:33.656 "process": { 00:14:33.656 "type": "rebuild", 00:14:33.656 "target": "spare", 00:14:33.656 "progress": { 00:14:33.656 "blocks": 12288, 00:14:33.656 "percent": 19 00:14:33.656 } 00:14:33.656 }, 00:14:33.656 "base_bdevs_list": [ 00:14:33.656 { 00:14:33.656 "name": "spare", 00:14:33.656 "uuid": "64c44fd8-1187-591d-86d9-97aaf90b5bf1", 00:14:33.656 "is_configured": true, 00:14:33.656 "data_offset": 2048, 00:14:33.656 "data_size": 63488 00:14:33.656 }, 00:14:33.656 { 00:14:33.656 "name": "BaseBdev2", 00:14:33.656 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:33.656 "is_configured": true, 00:14:33.656 "data_offset": 2048, 00:14:33.656 "data_size": 63488 00:14:33.656 } 00:14:33.656 ] 00:14:33.656 }' 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:33.656 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=453 00:14:33.656 
11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.656 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.915 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.915 "name": "raid_bdev1", 00:14:33.915 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:33.915 "strip_size_kb": 0, 00:14:33.915 "state": "online", 00:14:33.915 "raid_level": "raid1", 00:14:33.915 "superblock": true, 00:14:33.915 "num_base_bdevs": 2, 00:14:33.915 "num_base_bdevs_discovered": 2, 00:14:33.915 "num_base_bdevs_operational": 2, 00:14:33.915 "process": { 00:14:33.915 "type": "rebuild", 00:14:33.915 "target": "spare", 00:14:33.915 "progress": { 00:14:33.915 "blocks": 14336, 00:14:33.915 "percent": 22 00:14:33.915 } 00:14:33.915 }, 00:14:33.915 "base_bdevs_list": [ 00:14:33.915 { 00:14:33.915 "name": "spare", 00:14:33.915 
"uuid": "64c44fd8-1187-591d-86d9-97aaf90b5bf1", 00:14:33.915 "is_configured": true, 00:14:33.915 "data_offset": 2048, 00:14:33.915 "data_size": 63488 00:14:33.915 }, 00:14:33.915 { 00:14:33.915 "name": "BaseBdev2", 00:14:33.915 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:33.915 "is_configured": true, 00:14:33.915 "data_offset": 2048, 00:14:33.915 "data_size": 63488 00:14:33.915 } 00:14:33.915 ] 00:14:33.915 }' 00:14:33.915 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.915 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.915 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.915 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.915 11:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:33.915 141.50 IOPS, 424.50 MiB/s [2024-11-15T11:26:16.865Z] [2024-11-15 11:26:16.859178] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:33.915 [2024-11-15 11:26:16.859775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:34.174 [2024-11-15 11:26:16.984555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:34.433 [2024-11-15 11:26:17.212445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:34.433 [2024-11-15 11:26:17.213356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:34.999 [2024-11-15 11:26:17.687351] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 
30720 offset_end: 36864 00:14:34.999 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.999 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.999 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.999 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.999 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.999 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.999 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.999 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.999 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.999 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.999 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.999 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.999 "name": "raid_bdev1", 00:14:34.999 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:34.999 "strip_size_kb": 0, 00:14:34.999 "state": "online", 00:14:34.999 "raid_level": "raid1", 00:14:35.000 "superblock": true, 00:14:35.000 "num_base_bdevs": 2, 00:14:35.000 "num_base_bdevs_discovered": 2, 00:14:35.000 "num_base_bdevs_operational": 2, 00:14:35.000 "process": { 00:14:35.000 "type": "rebuild", 00:14:35.000 "target": "spare", 00:14:35.000 "progress": { 00:14:35.000 "blocks": 32768, 00:14:35.000 "percent": 51 00:14:35.000 } 00:14:35.000 }, 00:14:35.000 "base_bdevs_list": [ 00:14:35.000 { 
00:14:35.000 "name": "spare", 00:14:35.000 "uuid": "64c44fd8-1187-591d-86d9-97aaf90b5bf1", 00:14:35.000 "is_configured": true, 00:14:35.000 "data_offset": 2048, 00:14:35.000 "data_size": 63488 00:14:35.000 }, 00:14:35.000 { 00:14:35.000 "name": "BaseBdev2", 00:14:35.000 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:35.000 "is_configured": true, 00:14:35.000 "data_offset": 2048, 00:14:35.000 "data_size": 63488 00:14:35.000 } 00:14:35.000 ] 00:14:35.000 }' 00:14:35.000 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.000 127.40 IOPS, 382.20 MiB/s [2024-11-15T11:26:17.950Z] 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.000 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.000 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.000 11:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:35.000 [2024-11-15 11:26:17.907314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:35.000 [2024-11-15 11:26:17.907874] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:35.936 [2024-11-15 11:26:18.571031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:35.936 111.83 IOPS, 335.50 MiB/s [2024-11-15T11:26:18.886Z] 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.936 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.936 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:14:35.936 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.936 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.936 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.195 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.195 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.195 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.195 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.195 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.195 [2024-11-15 11:26:18.940019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:36.195 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.195 "name": "raid_bdev1", 00:14:36.195 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:36.195 "strip_size_kb": 0, 00:14:36.195 "state": "online", 00:14:36.195 "raid_level": "raid1", 00:14:36.195 "superblock": true, 00:14:36.195 "num_base_bdevs": 2, 00:14:36.195 "num_base_bdevs_discovered": 2, 00:14:36.195 "num_base_bdevs_operational": 2, 00:14:36.195 "process": { 00:14:36.195 "type": "rebuild", 00:14:36.195 "target": "spare", 00:14:36.195 "progress": { 00:14:36.195 "blocks": 49152, 00:14:36.195 "percent": 77 00:14:36.195 } 00:14:36.195 }, 00:14:36.195 "base_bdevs_list": [ 00:14:36.195 { 00:14:36.195 "name": "spare", 00:14:36.195 "uuid": "64c44fd8-1187-591d-86d9-97aaf90b5bf1", 00:14:36.195 "is_configured": true, 00:14:36.195 "data_offset": 2048, 00:14:36.195 "data_size": 63488 00:14:36.195 }, 00:14:36.195 { 
00:14:36.195 "name": "BaseBdev2", 00:14:36.195 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:36.195 "is_configured": true, 00:14:36.195 "data_offset": 2048, 00:14:36.195 "data_size": 63488 00:14:36.195 } 00:14:36.195 ] 00:14:36.195 }' 00:14:36.195 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.195 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.195 11:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.195 11:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.195 11:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.196 [2024-11-15 11:26:19.050119] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:36.196 [2024-11-15 11:26:19.050464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:36.454 [2024-11-15 11:26:19.387353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:37.021 102.00 IOPS, 306.00 MiB/s [2024-11-15T11:26:19.971Z] [2024-11-15 11:26:19.838081] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:37.021 [2024-11-15 11:26:19.944616] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:37.021 [2024-11-15 11:26:19.948516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.280 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.281 11:26:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.281 "name": "raid_bdev1", 00:14:37.281 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:37.281 "strip_size_kb": 0, 00:14:37.281 "state": "online", 00:14:37.281 "raid_level": "raid1", 00:14:37.281 "superblock": true, 00:14:37.281 "num_base_bdevs": 2, 00:14:37.281 "num_base_bdevs_discovered": 2, 00:14:37.281 "num_base_bdevs_operational": 2, 00:14:37.281 "base_bdevs_list": [ 00:14:37.281 { 00:14:37.281 "name": "spare", 00:14:37.281 "uuid": "64c44fd8-1187-591d-86d9-97aaf90b5bf1", 00:14:37.281 "is_configured": true, 00:14:37.281 "data_offset": 2048, 00:14:37.281 "data_size": 63488 00:14:37.281 }, 00:14:37.281 { 00:14:37.281 "name": "BaseBdev2", 00:14:37.281 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:37.281 "is_configured": true, 00:14:37.281 "data_offset": 2048, 00:14:37.281 "data_size": 63488 00:14:37.281 } 00:14:37.281 ] 00:14:37.281 }' 00:14:37.281 11:26:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.281 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.553 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.553 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.553 "name": "raid_bdev1", 00:14:37.553 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:37.553 "strip_size_kb": 0, 00:14:37.553 "state": "online", 00:14:37.553 "raid_level": "raid1", 00:14:37.553 "superblock": 
true, 00:14:37.553 "num_base_bdevs": 2, 00:14:37.553 "num_base_bdevs_discovered": 2, 00:14:37.553 "num_base_bdevs_operational": 2, 00:14:37.553 "base_bdevs_list": [ 00:14:37.553 { 00:14:37.553 "name": "spare", 00:14:37.553 "uuid": "64c44fd8-1187-591d-86d9-97aaf90b5bf1", 00:14:37.553 "is_configured": true, 00:14:37.553 "data_offset": 2048, 00:14:37.553 "data_size": 63488 00:14:37.553 }, 00:14:37.553 { 00:14:37.553 "name": "BaseBdev2", 00:14:37.553 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:37.553 "is_configured": true, 00:14:37.553 "data_offset": 2048, 00:14:37.553 "data_size": 63488 00:14:37.553 } 00:14:37.553 ] 00:14:37.553 }' 00:14:37.553 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.553 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.553 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.553 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.553 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.553 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.554 "name": "raid_bdev1", 00:14:37.554 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:37.554 "strip_size_kb": 0, 00:14:37.554 "state": "online", 00:14:37.554 "raid_level": "raid1", 00:14:37.554 "superblock": true, 00:14:37.554 "num_base_bdevs": 2, 00:14:37.554 "num_base_bdevs_discovered": 2, 00:14:37.554 "num_base_bdevs_operational": 2, 00:14:37.554 "base_bdevs_list": [ 00:14:37.554 { 00:14:37.554 "name": "spare", 00:14:37.554 "uuid": "64c44fd8-1187-591d-86d9-97aaf90b5bf1", 00:14:37.554 "is_configured": true, 00:14:37.554 "data_offset": 2048, 00:14:37.554 "data_size": 63488 00:14:37.554 }, 00:14:37.554 { 00:14:37.554 "name": "BaseBdev2", 00:14:37.554 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:37.554 "is_configured": true, 00:14:37.554 "data_offset": 2048, 00:14:37.554 "data_size": 63488 00:14:37.554 } 00:14:37.554 ] 00:14:37.554 }' 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.554 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.121 93.12 IOPS, 
279.38 MiB/s [2024-11-15T11:26:21.071Z] 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:38.121 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.121 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.121 [2024-11-15 11:26:20.887779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.121 [2024-11-15 11:26:20.887838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.121 00:14:38.121 Latency(us) 00:14:38.121 [2024-11-15T11:26:21.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.121 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:38.121 raid_bdev1 : 8.17 91.73 275.19 0.00 0.00 14071.67 286.72 116296.61 00:14:38.121 [2024-11-15T11:26:21.071Z] =================================================================================================================== 00:14:38.121 [2024-11-15T11:26:21.071Z] Total : 91.73 275.19 0.00 0.00 14071.67 286.72 116296.61 00:14:38.121 [2024-11-15 11:26:20.964579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.121 [2024-11-15 11:26:20.964679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.121 [2024-11-15 11:26:20.964776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.121 [2024-11-15 11:26:20.964797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:38.121 { 00:14:38.121 "results": [ 00:14:38.121 { 00:14:38.121 "job": "raid_bdev1", 00:14:38.121 "core_mask": "0x1", 00:14:38.121 "workload": "randrw", 00:14:38.121 "percentage": 50, 00:14:38.121 "status": "finished", 00:14:38.121 "queue_depth": 2, 00:14:38.121 
"io_size": 3145728, 00:14:38.121 "runtime": 8.165244, 00:14:38.121 "iops": 91.73026550094522, 00:14:38.121 "mibps": 275.19079650283567, 00:14:38.121 "io_failed": 0, 00:14:38.121 "io_timeout": 0, 00:14:38.121 "avg_latency_us": 14071.665318606627, 00:14:38.121 "min_latency_us": 286.72, 00:14:38.121 "max_latency_us": 116296.61090909092 00:14:38.121 } 00:14:38.121 ], 00:14:38.121 "core_count": 1 00:14:38.121 } 00:14:38.121 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.121 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.121 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.121 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.121 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:38.121 11:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.121 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:38.689 /dev/nbd0 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.689 1+0 records in 00:14:38.689 1+0 records out 00:14:38.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003568 s, 11.5 MB/s 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.689 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.690 11:26:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:38.949 /dev/nbd1 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.949 1+0 records in 00:14:38.949 1+0 records out 00:14:38.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362643 s, 11.3 MB/s 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.949 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:39.208 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:39.208 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.208 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:39.208 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.208 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:39.208 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.208 11:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.467 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:39.726 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.726 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.726 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.726 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.726 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.726 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:39.727 11:26:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.727 [2024-11-15 11:26:22.517971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:39.727 [2024-11-15 11:26:22.518101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.727 [2024-11-15 11:26:22.518139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:39.727 [2024-11-15 11:26:22.518158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.727 [2024-11-15 11:26:22.521167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.727 [2024-11-15 11:26:22.521277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:39.727 [2024-11-15 11:26:22.521421] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:39.727 [2024-11-15 11:26:22.521501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.727 [2024-11-15 11:26:22.521714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.727 spare 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.727 [2024-11-15 11:26:22.621868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:39.727 [2024-11-15 11:26:22.621902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:39.727 [2024-11-15 11:26:22.622389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:39.727 [2024-11-15 11:26:22.622697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:39.727 [2024-11-15 11:26:22.622899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:39.727 [2024-11-15 11:26:22.623175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.727 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.986 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.986 "name": "raid_bdev1", 00:14:39.986 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:39.986 "strip_size_kb": 0, 00:14:39.986 "state": "online", 00:14:39.986 "raid_level": "raid1", 00:14:39.986 "superblock": true, 00:14:39.986 "num_base_bdevs": 2, 00:14:39.986 "num_base_bdevs_discovered": 2, 00:14:39.986 "num_base_bdevs_operational": 2, 00:14:39.986 "base_bdevs_list": [ 00:14:39.986 { 00:14:39.986 "name": "spare", 00:14:39.986 "uuid": "64c44fd8-1187-591d-86d9-97aaf90b5bf1", 00:14:39.986 "is_configured": true, 00:14:39.986 "data_offset": 2048, 00:14:39.986 "data_size": 63488 00:14:39.986 }, 00:14:39.986 { 00:14:39.986 "name": "BaseBdev2", 00:14:39.986 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:39.986 "is_configured": true, 00:14:39.986 "data_offset": 2048, 00:14:39.986 "data_size": 63488 00:14:39.986 } 00:14:39.986 ] 00:14:39.986 }' 00:14:39.986 11:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.986 
11:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.247 "name": "raid_bdev1", 00:14:40.247 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:40.247 "strip_size_kb": 0, 00:14:40.247 "state": "online", 00:14:40.247 "raid_level": "raid1", 00:14:40.247 "superblock": true, 00:14:40.247 "num_base_bdevs": 2, 00:14:40.247 "num_base_bdevs_discovered": 2, 00:14:40.247 "num_base_bdevs_operational": 2, 00:14:40.247 "base_bdevs_list": [ 00:14:40.247 { 00:14:40.247 "name": "spare", 00:14:40.247 "uuid": "64c44fd8-1187-591d-86d9-97aaf90b5bf1", 00:14:40.247 "is_configured": true, 00:14:40.247 "data_offset": 2048, 00:14:40.247 "data_size": 63488 00:14:40.247 }, 00:14:40.247 { 00:14:40.247 "name": "BaseBdev2", 
00:14:40.247 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:40.247 "is_configured": true, 00:14:40.247 "data_offset": 2048, 00:14:40.247 "data_size": 63488 00:14:40.247 } 00:14:40.247 ] 00:14:40.247 }' 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.247 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.507 [2024-11-15 11:26:23.275425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.507 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:40.508 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.508 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.508 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.508 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.508 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.508 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.508 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.508 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.508 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.508 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.508 "name": "raid_bdev1", 00:14:40.508 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:40.508 "strip_size_kb": 0, 00:14:40.508 "state": "online", 00:14:40.508 "raid_level": "raid1", 00:14:40.508 "superblock": true, 00:14:40.508 "num_base_bdevs": 2, 00:14:40.508 "num_base_bdevs_discovered": 1, 
00:14:40.508 "num_base_bdevs_operational": 1, 00:14:40.508 "base_bdevs_list": [ 00:14:40.508 { 00:14:40.508 "name": null, 00:14:40.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.508 "is_configured": false, 00:14:40.508 "data_offset": 0, 00:14:40.508 "data_size": 63488 00:14:40.508 }, 00:14:40.508 { 00:14:40.508 "name": "BaseBdev2", 00:14:40.508 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:40.508 "is_configured": true, 00:14:40.508 "data_offset": 2048, 00:14:40.508 "data_size": 63488 00:14:40.508 } 00:14:40.508 ] 00:14:40.508 }' 00:14:40.508 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.508 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.076 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:41.076 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.076 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.076 [2024-11-15 11:26:23.759723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.076 [2024-11-15 11:26:23.759996] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:41.076 [2024-11-15 11:26:23.760017] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:41.076 [2024-11-15 11:26:23.760085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.076 [2024-11-15 11:26:23.777843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:41.076 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.076 11:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:41.076 [2024-11-15 11:26:23.780615] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:42.009 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.009 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.009 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.009 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.009 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.009 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.009 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.010 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.010 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.010 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.010 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.010 "name": "raid_bdev1", 00:14:42.010 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:42.010 "strip_size_kb": 0, 00:14:42.010 "state": "online", 
00:14:42.010 "raid_level": "raid1", 00:14:42.010 "superblock": true, 00:14:42.010 "num_base_bdevs": 2, 00:14:42.010 "num_base_bdevs_discovered": 2, 00:14:42.010 "num_base_bdevs_operational": 2, 00:14:42.010 "process": { 00:14:42.010 "type": "rebuild", 00:14:42.010 "target": "spare", 00:14:42.010 "progress": { 00:14:42.010 "blocks": 20480, 00:14:42.010 "percent": 32 00:14:42.010 } 00:14:42.010 }, 00:14:42.010 "base_bdevs_list": [ 00:14:42.010 { 00:14:42.010 "name": "spare", 00:14:42.010 "uuid": "64c44fd8-1187-591d-86d9-97aaf90b5bf1", 00:14:42.010 "is_configured": true, 00:14:42.010 "data_offset": 2048, 00:14:42.010 "data_size": 63488 00:14:42.010 }, 00:14:42.010 { 00:14:42.010 "name": "BaseBdev2", 00:14:42.010 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:42.010 "is_configured": true, 00:14:42.010 "data_offset": 2048, 00:14:42.010 "data_size": 63488 00:14:42.010 } 00:14:42.010 ] 00:14:42.010 }' 00:14:42.010 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.010 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.010 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.010 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.010 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:42.010 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.010 11:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.010 [2024-11-15 11:26:24.953989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.268 [2024-11-15 11:26:24.991593] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:42.268 [2024-11-15 
11:26:24.991851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.268 [2024-11-15 11:26:24.991892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.268 [2024-11-15 11:26:24.991907] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.268 "name": "raid_bdev1", 00:14:42.268 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:42.268 "strip_size_kb": 0, 00:14:42.268 "state": "online", 00:14:42.268 "raid_level": "raid1", 00:14:42.268 "superblock": true, 00:14:42.268 "num_base_bdevs": 2, 00:14:42.268 "num_base_bdevs_discovered": 1, 00:14:42.268 "num_base_bdevs_operational": 1, 00:14:42.268 "base_bdevs_list": [ 00:14:42.268 { 00:14:42.268 "name": null, 00:14:42.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.268 "is_configured": false, 00:14:42.268 "data_offset": 0, 00:14:42.268 "data_size": 63488 00:14:42.268 }, 00:14:42.268 { 00:14:42.268 "name": "BaseBdev2", 00:14:42.268 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:42.268 "is_configured": true, 00:14:42.268 "data_offset": 2048, 00:14:42.268 "data_size": 63488 00:14:42.268 } 00:14:42.268 ] 00:14:42.268 }' 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.268 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.836 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:42.836 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.836 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.836 [2024-11-15 11:26:25.551492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:42.836 [2024-11-15 11:26:25.551658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.836 [2024-11-15 11:26:25.551700] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:14:42.836 [2024-11-15 11:26:25.551716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.836 [2024-11-15 11:26:25.552452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.836 [2024-11-15 11:26:25.552480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:42.836 [2024-11-15 11:26:25.552649] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:42.836 [2024-11-15 11:26:25.552670] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:42.836 [2024-11-15 11:26:25.552687] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:42.836 [2024-11-15 11:26:25.552716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.836 spare 00:14:42.836 [2024-11-15 11:26:25.570203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:42.836 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.836 11:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:42.836 [2024-11-15 11:26:25.573106] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.800 "name": "raid_bdev1", 00:14:43.800 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:43.800 "strip_size_kb": 0, 00:14:43.800 "state": "online", 00:14:43.800 "raid_level": "raid1", 00:14:43.800 "superblock": true, 00:14:43.800 "num_base_bdevs": 2, 00:14:43.800 "num_base_bdevs_discovered": 2, 00:14:43.800 "num_base_bdevs_operational": 2, 00:14:43.800 "process": { 00:14:43.800 "type": "rebuild", 00:14:43.800 "target": "spare", 00:14:43.800 "progress": { 00:14:43.800 "blocks": 20480, 00:14:43.800 "percent": 32 00:14:43.800 } 00:14:43.800 }, 00:14:43.800 "base_bdevs_list": [ 00:14:43.800 { 00:14:43.800 "name": "spare", 00:14:43.800 "uuid": "64c44fd8-1187-591d-86d9-97aaf90b5bf1", 00:14:43.800 "is_configured": true, 00:14:43.800 "data_offset": 2048, 00:14:43.800 "data_size": 63488 00:14:43.800 }, 00:14:43.800 { 00:14:43.800 "name": "BaseBdev2", 00:14:43.800 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:43.800 "is_configured": true, 00:14:43.800 "data_offset": 2048, 00:14:43.800 "data_size": 63488 00:14:43.800 } 00:14:43.800 ] 00:14:43.800 }' 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.800 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.800 [2024-11-15 11:26:26.734979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.142 [2024-11-15 11:26:26.784567] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:44.143 [2024-11-15 11:26:26.784800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.143 [2024-11-15 11:26:26.784829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.143 [2024-11-15 11:26:26.784846] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.143 "name": "raid_bdev1", 00:14:44.143 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:44.143 "strip_size_kb": 0, 00:14:44.143 "state": "online", 00:14:44.143 "raid_level": "raid1", 00:14:44.143 "superblock": true, 00:14:44.143 "num_base_bdevs": 2, 00:14:44.143 "num_base_bdevs_discovered": 1, 00:14:44.143 "num_base_bdevs_operational": 1, 00:14:44.143 "base_bdevs_list": [ 00:14:44.143 { 00:14:44.143 "name": null, 00:14:44.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.143 "is_configured": false, 00:14:44.143 "data_offset": 0, 00:14:44.143 "data_size": 63488 00:14:44.143 }, 00:14:44.143 { 00:14:44.143 "name": "BaseBdev2", 00:14:44.143 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:44.143 "is_configured": true, 00:14:44.143 "data_offset": 2048, 00:14:44.143 "data_size": 63488 00:14:44.143 } 00:14:44.143 ] 00:14:44.143 }' 
00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.143 11:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.434 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.434 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.434 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.434 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.434 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.434 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.434 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.434 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.434 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.434 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.434 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.434 "name": "raid_bdev1", 00:14:44.434 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:44.434 "strip_size_kb": 0, 00:14:44.434 "state": "online", 00:14:44.434 "raid_level": "raid1", 00:14:44.434 "superblock": true, 00:14:44.434 "num_base_bdevs": 2, 00:14:44.434 "num_base_bdevs_discovered": 1, 00:14:44.434 "num_base_bdevs_operational": 1, 00:14:44.434 "base_bdevs_list": [ 00:14:44.434 { 00:14:44.434 "name": null, 00:14:44.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.434 "is_configured": false, 00:14:44.434 "data_offset": 0, 
00:14:44.434 "data_size": 63488 00:14:44.434 }, 00:14:44.434 { 00:14:44.434 "name": "BaseBdev2", 00:14:44.434 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:44.434 "is_configured": true, 00:14:44.434 "data_offset": 2048, 00:14:44.434 "data_size": 63488 00:14:44.434 } 00:14:44.434 ] 00:14:44.434 }' 00:14:44.434 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.693 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.693 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.693 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.693 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:44.693 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.693 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.693 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.693 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:44.693 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.693 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.693 [2024-11-15 11:26:27.489937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:44.693 [2024-11-15 11:26:27.490198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.693 [2024-11-15 11:26:27.490260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:44.693 [2024-11-15 11:26:27.490285] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.693 [2024-11-15 11:26:27.490926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.693 [2024-11-15 11:26:27.490956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.693 [2024-11-15 11:26:27.491054] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:44.693 [2024-11-15 11:26:27.491087] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:44.693 [2024-11-15 11:26:27.491098] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:44.693 [2024-11-15 11:26:27.491115] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:44.693 BaseBdev1 00:14:44.693 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.693 11:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.629 "name": "raid_bdev1", 00:14:45.629 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:45.629 "strip_size_kb": 0, 00:14:45.629 "state": "online", 00:14:45.629 "raid_level": "raid1", 00:14:45.629 "superblock": true, 00:14:45.629 "num_base_bdevs": 2, 00:14:45.629 "num_base_bdevs_discovered": 1, 00:14:45.629 "num_base_bdevs_operational": 1, 00:14:45.629 "base_bdevs_list": [ 00:14:45.629 { 00:14:45.629 "name": null, 00:14:45.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.629 "is_configured": false, 00:14:45.629 "data_offset": 0, 00:14:45.629 "data_size": 63488 00:14:45.629 }, 00:14:45.629 { 00:14:45.629 "name": "BaseBdev2", 00:14:45.629 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:45.629 "is_configured": true, 00:14:45.629 "data_offset": 2048, 00:14:45.629 "data_size": 63488 00:14:45.629 } 00:14:45.629 ] 00:14:45.629 }' 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.629 11:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.196 "name": "raid_bdev1", 00:14:46.196 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:46.196 "strip_size_kb": 0, 00:14:46.196 "state": "online", 00:14:46.196 "raid_level": "raid1", 00:14:46.196 "superblock": true, 00:14:46.196 "num_base_bdevs": 2, 00:14:46.196 "num_base_bdevs_discovered": 1, 00:14:46.196 "num_base_bdevs_operational": 1, 00:14:46.196 "base_bdevs_list": [ 00:14:46.196 { 00:14:46.196 "name": null, 00:14:46.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.196 "is_configured": false, 00:14:46.196 "data_offset": 0, 00:14:46.196 "data_size": 63488 00:14:46.196 }, 00:14:46.196 { 00:14:46.196 "name": "BaseBdev2", 00:14:46.196 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:46.196 "is_configured": true, 
00:14:46.196 "data_offset": 2048, 00:14:46.196 "data_size": 63488 00:14:46.196 } 00:14:46.196 ] 00:14:46.196 }' 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.196 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.456 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.456 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:46.456 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:46.456 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:46.456 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:46.456 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:46.456 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:46.456 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:46.456 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:46.456 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.456 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.456 [2024-11-15 11:26:29.190911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.456 [2024-11-15 11:26:29.191350] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:46.457 [2024-11-15 11:26:29.191379] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:46.457 request: 00:14:46.457 { 00:14:46.457 "base_bdev": "BaseBdev1", 00:14:46.457 "raid_bdev": "raid_bdev1", 00:14:46.457 "method": "bdev_raid_add_base_bdev", 00:14:46.457 "req_id": 1 00:14:46.457 } 00:14:46.457 Got JSON-RPC error response 00:14:46.457 response: 00:14:46.457 { 00:14:46.457 "code": -22, 00:14:46.457 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:46.457 } 00:14:46.457 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:46.457 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:46.457 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:46.457 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:46.457 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:46.457 11:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.394 "name": "raid_bdev1", 00:14:47.394 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:47.394 "strip_size_kb": 0, 00:14:47.394 "state": "online", 00:14:47.394 "raid_level": "raid1", 00:14:47.394 "superblock": true, 00:14:47.394 "num_base_bdevs": 2, 00:14:47.394 "num_base_bdevs_discovered": 1, 00:14:47.394 "num_base_bdevs_operational": 1, 00:14:47.394 "base_bdevs_list": [ 00:14:47.394 { 00:14:47.394 "name": null, 00:14:47.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.394 "is_configured": false, 00:14:47.394 "data_offset": 0, 00:14:47.394 "data_size": 63488 00:14:47.394 }, 00:14:47.394 { 00:14:47.394 "name": "BaseBdev2", 00:14:47.394 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:47.394 "is_configured": true, 00:14:47.394 "data_offset": 2048, 00:14:47.394 "data_size": 63488 00:14:47.394 } 00:14:47.394 ] 00:14:47.394 }' 
00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.394 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.962 "name": "raid_bdev1", 00:14:47.962 "uuid": "fda12c38-f462-49bf-986a-a0140cd6bc0e", 00:14:47.962 "strip_size_kb": 0, 00:14:47.962 "state": "online", 00:14:47.962 "raid_level": "raid1", 00:14:47.962 "superblock": true, 00:14:47.962 "num_base_bdevs": 2, 00:14:47.962 "num_base_bdevs_discovered": 1, 00:14:47.962 "num_base_bdevs_operational": 1, 00:14:47.962 "base_bdevs_list": [ 00:14:47.962 { 00:14:47.962 "name": null, 00:14:47.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.962 "is_configured": false, 00:14:47.962 "data_offset": 0, 
00:14:47.962 "data_size": 63488 00:14:47.962 }, 00:14:47.962 { 00:14:47.962 "name": "BaseBdev2", 00:14:47.962 "uuid": "8390899b-37a2-5650-a131-5a5cec1918a4", 00:14:47.962 "is_configured": true, 00:14:47.962 "data_offset": 2048, 00:14:47.962 "data_size": 63488 00:14:47.962 } 00:14:47.962 ] 00:14:47.962 }' 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76972 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 76972 ']' 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 76972 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:47.962 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76972 00:14:48.221 killing process with pid 76972 00:14:48.221 Received shutdown signal, test time was about 18.145430 seconds 00:14:48.221 00:14:48.221 Latency(us) 00:14:48.221 [2024-11-15T11:26:31.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.221 [2024-11-15T11:26:31.171Z] =================================================================================================================== 00:14:48.221 [2024-11-15T11:26:31.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.221 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:48.221 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:48.221 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76972' 00:14:48.221 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 76972 00:14:48.221 [2024-11-15 11:26:30.927889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.221 11:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 76972 00:14:48.221 [2024-11-15 11:26:30.928066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.221 [2024-11-15 11:26:30.928197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.221 [2024-11-15 11:26:30.928214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:48.221 [2024-11-15 11:26:31.139337] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.600 ************************************ 00:14:49.600 END TEST raid_rebuild_test_sb_io 00:14:49.600 ************************************ 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:49.600 00:14:49.600 real 0m21.437s 00:14:49.600 user 0m28.994s 00:14:49.600 sys 0m2.104s 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.600 11:26:32 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:49.600 11:26:32 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:49.600 11:26:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 
00:14:49.600 11:26:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:49.600 11:26:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.600 ************************************ 00:14:49.600 START TEST raid_rebuild_test 00:14:49.600 ************************************ 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:49.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77672 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77672 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77672 ']' 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:49.600 11:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.600 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:49.600 Zero copy mechanism will not be used. 00:14:49.600 [2024-11-15 11:26:32.399978] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:14:49.600 [2024-11-15 11:26:32.400173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77672 ] 00:14:49.859 [2024-11-15 11:26:32.575454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.859 [2024-11-15 11:26:32.712409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.118 [2024-11-15 11:26:32.928710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.118 [2024-11-15 11:26:32.928768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.686 BaseBdev1_malloc 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.686 [2024-11-15 11:26:33.381216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:50.686 
[2024-11-15 11:26:33.381465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.686 [2024-11-15 11:26:33.381546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:50.686 [2024-11-15 11:26:33.381684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.686 [2024-11-15 11:26:33.384706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.686 [2024-11-15 11:26:33.384900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:50.686 BaseBdev1 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.686 BaseBdev2_malloc 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.686 [2024-11-15 11:26:33.436772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:50.686 [2024-11-15 11:26:33.436866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.686 [2024-11-15 11:26:33.436901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:14:50.686 [2024-11-15 11:26:33.436920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.686 [2024-11-15 11:26:33.439890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.686 [2024-11-15 11:26:33.439954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:50.686 BaseBdev2 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.686 BaseBdev3_malloc 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.686 [2024-11-15 11:26:33.500414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:50.686 [2024-11-15 11:26:33.500506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.686 [2024-11-15 11:26:33.500542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:50.686 [2024-11-15 11:26:33.500591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.686 [2024-11-15 11:26:33.503585] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:14:50.686 [2024-11-15 11:26:33.503649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:50.686 BaseBdev3 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.686 BaseBdev4_malloc 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.686 [2024-11-15 11:26:33.554853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:50.686 [2024-11-15 11:26:33.555110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.686 [2024-11-15 11:26:33.555170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:50.686 [2024-11-15 11:26:33.555211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.686 [2024-11-15 11:26:33.558272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.686 [2024-11-15 11:26:33.558326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:50.686 BaseBdev4 00:14:50.686 11:26:33 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.686 spare_malloc 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.686 spare_delay 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.686 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.686 [2024-11-15 11:26:33.623340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:50.686 [2024-11-15 11:26:33.623429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.686 [2024-11-15 11:26:33.623457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:50.687 [2024-11-15 11:26:33.623475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.687 [2024-11-15 11:26:33.626613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.687 [2024-11-15 11:26:33.626829] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:50.687 spare 00:14:50.687 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.687 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:50.687 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.687 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.946 [2024-11-15 11:26:33.635585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.946 [2024-11-15 11:26:33.638530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.946 [2024-11-15 11:26:33.638788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.946 [2024-11-15 11:26:33.638925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:50.946 [2024-11-15 11:26:33.639050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:50.946 [2024-11-15 11:26:33.639091] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:50.946 [2024-11-15 11:26:33.639472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:50.946 [2024-11-15 11:26:33.639727] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:50.946 [2024-11-15 11:26:33.639748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:50.946 [2024-11-15 11:26:33.639998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.946 11:26:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.946 "name": "raid_bdev1", 00:14:50.946 "uuid": "b8f41191-7337-4359-892a-999638638536", 00:14:50.946 "strip_size_kb": 0, 00:14:50.946 "state": "online", 00:14:50.946 "raid_level": "raid1", 00:14:50.946 "superblock": false, 00:14:50.946 "num_base_bdevs": 4, 00:14:50.946 "num_base_bdevs_discovered": 4, 
00:14:50.946 "num_base_bdevs_operational": 4, 00:14:50.946 "base_bdevs_list": [ 00:14:50.946 { 00:14:50.946 "name": "BaseBdev1", 00:14:50.946 "uuid": "eedafded-9d7b-5b5c-b451-e1cebbe851d8", 00:14:50.946 "is_configured": true, 00:14:50.946 "data_offset": 0, 00:14:50.946 "data_size": 65536 00:14:50.946 }, 00:14:50.946 { 00:14:50.946 "name": "BaseBdev2", 00:14:50.946 "uuid": "6e11513a-11e7-591c-a808-35829ddaaf91", 00:14:50.946 "is_configured": true, 00:14:50.946 "data_offset": 0, 00:14:50.946 "data_size": 65536 00:14:50.946 }, 00:14:50.946 { 00:14:50.946 "name": "BaseBdev3", 00:14:50.946 "uuid": "1f4424ff-d5d0-5a34-a381-0897272db721", 00:14:50.946 "is_configured": true, 00:14:50.946 "data_offset": 0, 00:14:50.946 "data_size": 65536 00:14:50.946 }, 00:14:50.946 { 00:14:50.946 "name": "BaseBdev4", 00:14:50.946 "uuid": "632ac965-ea67-5d2f-a909-5b060f60ff6c", 00:14:50.946 "is_configured": true, 00:14:50.946 "data_offset": 0, 00:14:50.946 "data_size": 65536 00:14:50.946 } 00:14:50.946 ] 00:14:50.946 }' 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.946 11:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:51.513 [2024-11-15 11:26:34.184651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.513 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:51.772 
[2024-11-15 11:26:34.572353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:51.772 /dev/nbd0 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.772 1+0 records in 00:14:51.772 1+0 records out 00:14:51.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622943 s, 6.6 MB/s 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:51.772 11:26:34 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:51.772 11:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:01.784 65536+0 records in 00:15:01.784 65536+0 records out 00:15:01.784 33554432 bytes (34 MB, 32 MiB) copied, 8.34886 s, 4.0 MB/s 00:15:01.784 11:26:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:01.784 11:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.784 11:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:01.784 11:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.784 11:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:01.784 11:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.784 11:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:01.784 [2024-11-15 11:26:43.337908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:01.784 11:26:43 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.784 [2024-11-15 11:26:43.357993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.784 11:26:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.784 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.784 "name": "raid_bdev1", 00:15:01.784 "uuid": "b8f41191-7337-4359-892a-999638638536", 00:15:01.784 "strip_size_kb": 0, 00:15:01.784 "state": "online", 00:15:01.784 "raid_level": "raid1", 00:15:01.784 "superblock": false, 00:15:01.784 "num_base_bdevs": 4, 00:15:01.784 "num_base_bdevs_discovered": 3, 00:15:01.784 "num_base_bdevs_operational": 3, 00:15:01.784 "base_bdevs_list": [ 00:15:01.784 { 00:15:01.784 "name": null, 00:15:01.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.784 "is_configured": false, 00:15:01.784 "data_offset": 0, 00:15:01.784 "data_size": 65536 00:15:01.784 }, 00:15:01.784 { 00:15:01.784 "name": "BaseBdev2", 00:15:01.784 "uuid": "6e11513a-11e7-591c-a808-35829ddaaf91", 00:15:01.784 "is_configured": true, 00:15:01.784 "data_offset": 0, 00:15:01.784 "data_size": 65536 00:15:01.784 }, 00:15:01.784 { 00:15:01.784 "name": "BaseBdev3", 00:15:01.784 "uuid": "1f4424ff-d5d0-5a34-a381-0897272db721", 00:15:01.784 "is_configured": true, 00:15:01.784 "data_offset": 0, 00:15:01.784 "data_size": 65536 00:15:01.784 }, 00:15:01.784 { 00:15:01.784 "name": "BaseBdev4", 00:15:01.784 "uuid": "632ac965-ea67-5d2f-a909-5b060f60ff6c", 00:15:01.785 "is_configured": true, 00:15:01.785 "data_offset": 0, 00:15:01.785 "data_size": 65536 00:15:01.785 } 00:15:01.785 ] 
00:15:01.785 }' 00:15:01.785 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.785 11:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.785 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.785 11:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.785 11:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.785 [2024-11-15 11:26:43.850236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.785 [2024-11-15 11:26:43.864437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:01.785 11:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.785 11:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:01.785 [2024-11-15 11:26:43.867193] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.044 "name": "raid_bdev1", 00:15:02.044 "uuid": "b8f41191-7337-4359-892a-999638638536", 00:15:02.044 "strip_size_kb": 0, 00:15:02.044 "state": "online", 00:15:02.044 "raid_level": "raid1", 00:15:02.044 "superblock": false, 00:15:02.044 "num_base_bdevs": 4, 00:15:02.044 "num_base_bdevs_discovered": 4, 00:15:02.044 "num_base_bdevs_operational": 4, 00:15:02.044 "process": { 00:15:02.044 "type": "rebuild", 00:15:02.044 "target": "spare", 00:15:02.044 "progress": { 00:15:02.044 "blocks": 20480, 00:15:02.044 "percent": 31 00:15:02.044 } 00:15:02.044 }, 00:15:02.044 "base_bdevs_list": [ 00:15:02.044 { 00:15:02.044 "name": "spare", 00:15:02.044 "uuid": "15089df3-3b3f-5e88-bf28-7f138b02c5c1", 00:15:02.044 "is_configured": true, 00:15:02.044 "data_offset": 0, 00:15:02.044 "data_size": 65536 00:15:02.044 }, 00:15:02.044 { 00:15:02.044 "name": "BaseBdev2", 00:15:02.044 "uuid": "6e11513a-11e7-591c-a808-35829ddaaf91", 00:15:02.044 "is_configured": true, 00:15:02.044 "data_offset": 0, 00:15:02.044 "data_size": 65536 00:15:02.044 }, 00:15:02.044 { 00:15:02.044 "name": "BaseBdev3", 00:15:02.044 "uuid": "1f4424ff-d5d0-5a34-a381-0897272db721", 00:15:02.044 "is_configured": true, 00:15:02.044 "data_offset": 0, 00:15:02.044 "data_size": 65536 00:15:02.044 }, 00:15:02.044 { 00:15:02.044 "name": "BaseBdev4", 00:15:02.044 "uuid": "632ac965-ea67-5d2f-a909-5b060f60ff6c", 00:15:02.044 "is_configured": true, 00:15:02.044 "data_offset": 0, 00:15:02.044 "data_size": 65536 00:15:02.044 } 00:15:02.044 ] 00:15:02.044 }' 00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.044 11:26:44 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.329 11:26:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.329 [2024-11-15 11:26:45.048624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.329 [2024-11-15 11:26:45.078250] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:02.329 [2024-11-15 11:26:45.078333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.329 [2024-11-15 11:26:45.078360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.329 [2024-11-15 11:26:45.078376] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.329 "name": "raid_bdev1", 00:15:02.329 "uuid": "b8f41191-7337-4359-892a-999638638536", 00:15:02.329 "strip_size_kb": 0, 00:15:02.329 "state": "online", 00:15:02.329 "raid_level": "raid1", 00:15:02.329 "superblock": false, 00:15:02.329 "num_base_bdevs": 4, 00:15:02.329 "num_base_bdevs_discovered": 3, 00:15:02.329 "num_base_bdevs_operational": 3, 00:15:02.329 "base_bdevs_list": [ 00:15:02.329 { 00:15:02.329 "name": null, 00:15:02.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.329 "is_configured": false, 00:15:02.329 "data_offset": 0, 00:15:02.329 "data_size": 65536 00:15:02.329 }, 00:15:02.329 { 00:15:02.329 "name": "BaseBdev2", 00:15:02.329 "uuid": "6e11513a-11e7-591c-a808-35829ddaaf91", 00:15:02.329 "is_configured": true, 00:15:02.329 "data_offset": 0, 00:15:02.329 "data_size": 65536 00:15:02.329 }, 00:15:02.329 { 00:15:02.329 "name": "BaseBdev3", 00:15:02.329 "uuid": "1f4424ff-d5d0-5a34-a381-0897272db721", 00:15:02.329 
"is_configured": true, 00:15:02.329 "data_offset": 0, 00:15:02.329 "data_size": 65536 00:15:02.329 }, 00:15:02.329 { 00:15:02.329 "name": "BaseBdev4", 00:15:02.329 "uuid": "632ac965-ea67-5d2f-a909-5b060f60ff6c", 00:15:02.329 "is_configured": true, 00:15:02.329 "data_offset": 0, 00:15:02.329 "data_size": 65536 00:15:02.329 } 00:15:02.329 ] 00:15:02.329 }' 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.329 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.897 "name": "raid_bdev1", 00:15:02.897 "uuid": "b8f41191-7337-4359-892a-999638638536", 00:15:02.897 "strip_size_kb": 0, 00:15:02.897 "state": "online", 00:15:02.897 "raid_level": "raid1", 00:15:02.897 "superblock": false, 00:15:02.897 "num_base_bdevs": 4, 00:15:02.897 
"num_base_bdevs_discovered": 3, 00:15:02.897 "num_base_bdevs_operational": 3, 00:15:02.897 "base_bdevs_list": [ 00:15:02.897 { 00:15:02.897 "name": null, 00:15:02.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.897 "is_configured": false, 00:15:02.897 "data_offset": 0, 00:15:02.897 "data_size": 65536 00:15:02.897 }, 00:15:02.897 { 00:15:02.897 "name": "BaseBdev2", 00:15:02.897 "uuid": "6e11513a-11e7-591c-a808-35829ddaaf91", 00:15:02.897 "is_configured": true, 00:15:02.897 "data_offset": 0, 00:15:02.897 "data_size": 65536 00:15:02.897 }, 00:15:02.897 { 00:15:02.897 "name": "BaseBdev3", 00:15:02.897 "uuid": "1f4424ff-d5d0-5a34-a381-0897272db721", 00:15:02.897 "is_configured": true, 00:15:02.897 "data_offset": 0, 00:15:02.897 "data_size": 65536 00:15:02.897 }, 00:15:02.897 { 00:15:02.897 "name": "BaseBdev4", 00:15:02.897 "uuid": "632ac965-ea67-5d2f-a909-5b060f60ff6c", 00:15:02.897 "is_configured": true, 00:15:02.897 "data_offset": 0, 00:15:02.897 "data_size": 65536 00:15:02.897 } 00:15:02.897 ] 00:15:02.897 }' 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.897 [2024-11-15 11:26:45.780646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.897 [2024-11-15 11:26:45.798779] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.897 11:26:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:02.898 [2024-11-15 11:26:45.802460] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.273 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.273 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.273 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.273 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.273 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.273 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.273 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.273 11:26:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.273 11:26:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.273 11:26:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.273 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.273 "name": "raid_bdev1", 00:15:04.273 "uuid": "b8f41191-7337-4359-892a-999638638536", 00:15:04.273 "strip_size_kb": 0, 00:15:04.273 "state": "online", 00:15:04.273 "raid_level": "raid1", 00:15:04.273 "superblock": false, 00:15:04.273 "num_base_bdevs": 4, 00:15:04.273 "num_base_bdevs_discovered": 4, 00:15:04.273 "num_base_bdevs_operational": 4, 00:15:04.273 "process": { 00:15:04.273 "type": "rebuild", 00:15:04.273 "target": 
"spare", 00:15:04.273 "progress": { 00:15:04.273 "blocks": 20480, 00:15:04.273 "percent": 31 00:15:04.273 } 00:15:04.273 }, 00:15:04.273 "base_bdevs_list": [ 00:15:04.273 { 00:15:04.273 "name": "spare", 00:15:04.273 "uuid": "15089df3-3b3f-5e88-bf28-7f138b02c5c1", 00:15:04.273 "is_configured": true, 00:15:04.273 "data_offset": 0, 00:15:04.273 "data_size": 65536 00:15:04.273 }, 00:15:04.273 { 00:15:04.273 "name": "BaseBdev2", 00:15:04.273 "uuid": "6e11513a-11e7-591c-a808-35829ddaaf91", 00:15:04.273 "is_configured": true, 00:15:04.273 "data_offset": 0, 00:15:04.273 "data_size": 65536 00:15:04.273 }, 00:15:04.273 { 00:15:04.273 "name": "BaseBdev3", 00:15:04.273 "uuid": "1f4424ff-d5d0-5a34-a381-0897272db721", 00:15:04.274 "is_configured": true, 00:15:04.274 "data_offset": 0, 00:15:04.274 "data_size": 65536 00:15:04.274 }, 00:15:04.274 { 00:15:04.274 "name": "BaseBdev4", 00:15:04.274 "uuid": "632ac965-ea67-5d2f-a909-5b060f60ff6c", 00:15:04.274 "is_configured": true, 00:15:04.274 "data_offset": 0, 00:15:04.274 "data_size": 65536 00:15:04.274 } 00:15:04.274 ] 00:15:04.274 }' 00:15:04.274 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.274 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.274 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.274 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.274 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:04.274 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:04.274 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:04.274 11:26:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:04.274 11:26:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:04.274 11:26:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.274 11:26:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.274 [2024-11-15 11:26:46.979690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:04.274 [2024-11-15 11:26:47.013141] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:04.274 "name": "raid_bdev1", 00:15:04.274 "uuid": "b8f41191-7337-4359-892a-999638638536", 00:15:04.274 "strip_size_kb": 0, 00:15:04.274 "state": "online", 00:15:04.274 "raid_level": "raid1", 00:15:04.274 "superblock": false, 00:15:04.274 "num_base_bdevs": 4, 00:15:04.274 "num_base_bdevs_discovered": 3, 00:15:04.274 "num_base_bdevs_operational": 3, 00:15:04.274 "process": { 00:15:04.274 "type": "rebuild", 00:15:04.274 "target": "spare", 00:15:04.274 "progress": { 00:15:04.274 "blocks": 24576, 00:15:04.274 "percent": 37 00:15:04.274 } 00:15:04.274 }, 00:15:04.274 "base_bdevs_list": [ 00:15:04.274 { 00:15:04.274 "name": "spare", 00:15:04.274 "uuid": "15089df3-3b3f-5e88-bf28-7f138b02c5c1", 00:15:04.274 "is_configured": true, 00:15:04.274 "data_offset": 0, 00:15:04.274 "data_size": 65536 00:15:04.274 }, 00:15:04.274 { 00:15:04.274 "name": null, 00:15:04.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.274 "is_configured": false, 00:15:04.274 "data_offset": 0, 00:15:04.274 "data_size": 65536 00:15:04.274 }, 00:15:04.274 { 00:15:04.274 "name": "BaseBdev3", 00:15:04.274 "uuid": "1f4424ff-d5d0-5a34-a381-0897272db721", 00:15:04.274 "is_configured": true, 00:15:04.274 "data_offset": 0, 00:15:04.274 "data_size": 65536 00:15:04.274 }, 00:15:04.274 { 00:15:04.274 "name": "BaseBdev4", 00:15:04.274 "uuid": "632ac965-ea67-5d2f-a909-5b060f60ff6c", 00:15:04.274 "is_configured": true, 00:15:04.274 "data_offset": 0, 00:15:04.274 "data_size": 65536 00:15:04.274 } 00:15:04.274 ] 00:15:04.274 }' 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.274 11:26:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=484 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.274 11:26:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.533 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.533 "name": "raid_bdev1", 00:15:04.533 "uuid": "b8f41191-7337-4359-892a-999638638536", 00:15:04.533 "strip_size_kb": 0, 00:15:04.533 "state": "online", 00:15:04.533 "raid_level": "raid1", 00:15:04.533 "superblock": false, 00:15:04.533 "num_base_bdevs": 4, 00:15:04.533 "num_base_bdevs_discovered": 3, 00:15:04.533 "num_base_bdevs_operational": 3, 00:15:04.533 "process": { 00:15:04.533 "type": "rebuild", 00:15:04.533 "target": "spare", 00:15:04.533 "progress": { 00:15:04.533 "blocks": 26624, 00:15:04.533 "percent": 40 00:15:04.533 } 00:15:04.533 }, 00:15:04.533 "base_bdevs_list": [ 00:15:04.533 { 00:15:04.533 "name": 
"spare", 00:15:04.533 "uuid": "15089df3-3b3f-5e88-bf28-7f138b02c5c1", 00:15:04.533 "is_configured": true, 00:15:04.533 "data_offset": 0, 00:15:04.533 "data_size": 65536 00:15:04.533 }, 00:15:04.533 { 00:15:04.533 "name": null, 00:15:04.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.533 "is_configured": false, 00:15:04.533 "data_offset": 0, 00:15:04.533 "data_size": 65536 00:15:04.533 }, 00:15:04.533 { 00:15:04.533 "name": "BaseBdev3", 00:15:04.533 "uuid": "1f4424ff-d5d0-5a34-a381-0897272db721", 00:15:04.533 "is_configured": true, 00:15:04.533 "data_offset": 0, 00:15:04.533 "data_size": 65536 00:15:04.533 }, 00:15:04.533 { 00:15:04.533 "name": "BaseBdev4", 00:15:04.533 "uuid": "632ac965-ea67-5d2f-a909-5b060f60ff6c", 00:15:04.533 "is_configured": true, 00:15:04.533 "data_offset": 0, 00:15:04.533 "data_size": 65536 00:15:04.533 } 00:15:04.533 ] 00:15:04.533 }' 00:15:04.533 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.533 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.533 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.533 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.533 11:26:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.469 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.469 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.469 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.469 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.469 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.469 11:26:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.469 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.469 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.469 11:26:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.469 11:26:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.469 11:26:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.469 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.469 "name": "raid_bdev1", 00:15:05.469 "uuid": "b8f41191-7337-4359-892a-999638638536", 00:15:05.469 "strip_size_kb": 0, 00:15:05.469 "state": "online", 00:15:05.469 "raid_level": "raid1", 00:15:05.469 "superblock": false, 00:15:05.469 "num_base_bdevs": 4, 00:15:05.469 "num_base_bdevs_discovered": 3, 00:15:05.469 "num_base_bdevs_operational": 3, 00:15:05.469 "process": { 00:15:05.469 "type": "rebuild", 00:15:05.469 "target": "spare", 00:15:05.469 "progress": { 00:15:05.469 "blocks": 51200, 00:15:05.469 "percent": 78 00:15:05.469 } 00:15:05.469 }, 00:15:05.469 "base_bdevs_list": [ 00:15:05.469 { 00:15:05.469 "name": "spare", 00:15:05.469 "uuid": "15089df3-3b3f-5e88-bf28-7f138b02c5c1", 00:15:05.469 "is_configured": true, 00:15:05.469 "data_offset": 0, 00:15:05.469 "data_size": 65536 00:15:05.469 }, 00:15:05.469 { 00:15:05.469 "name": null, 00:15:05.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.469 "is_configured": false, 00:15:05.469 "data_offset": 0, 00:15:05.469 "data_size": 65536 00:15:05.469 }, 00:15:05.469 { 00:15:05.469 "name": "BaseBdev3", 00:15:05.469 "uuid": "1f4424ff-d5d0-5a34-a381-0897272db721", 00:15:05.469 "is_configured": true, 00:15:05.469 "data_offset": 0, 00:15:05.469 "data_size": 65536 00:15:05.469 }, 00:15:05.469 { 00:15:05.469 
"name": "BaseBdev4", 00:15:05.469 "uuid": "632ac965-ea67-5d2f-a909-5b060f60ff6c", 00:15:05.469 "is_configured": true, 00:15:05.469 "data_offset": 0, 00:15:05.469 "data_size": 65536 00:15:05.469 } 00:15:05.469 ] 00:15:05.469 }' 00:15:05.469 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.728 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.728 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.728 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.728 11:26:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.295 [2024-11-15 11:26:49.030717] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:06.295 [2024-11-15 11:26:49.030830] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:06.295 [2024-11-15 11:26:49.030889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.554 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.554 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.554 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.554 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.554 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.554 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.554 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.554 11:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:06.554 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.554 11:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.813 "name": "raid_bdev1", 00:15:06.813 "uuid": "b8f41191-7337-4359-892a-999638638536", 00:15:06.813 "strip_size_kb": 0, 00:15:06.813 "state": "online", 00:15:06.813 "raid_level": "raid1", 00:15:06.813 "superblock": false, 00:15:06.813 "num_base_bdevs": 4, 00:15:06.813 "num_base_bdevs_discovered": 3, 00:15:06.813 "num_base_bdevs_operational": 3, 00:15:06.813 "base_bdevs_list": [ 00:15:06.813 { 00:15:06.813 "name": "spare", 00:15:06.813 "uuid": "15089df3-3b3f-5e88-bf28-7f138b02c5c1", 00:15:06.813 "is_configured": true, 00:15:06.813 "data_offset": 0, 00:15:06.813 "data_size": 65536 00:15:06.813 }, 00:15:06.813 { 00:15:06.813 "name": null, 00:15:06.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.813 "is_configured": false, 00:15:06.813 "data_offset": 0, 00:15:06.813 "data_size": 65536 00:15:06.813 }, 00:15:06.813 { 00:15:06.813 "name": "BaseBdev3", 00:15:06.813 "uuid": "1f4424ff-d5d0-5a34-a381-0897272db721", 00:15:06.813 "is_configured": true, 00:15:06.813 "data_offset": 0, 00:15:06.813 "data_size": 65536 00:15:06.813 }, 00:15:06.813 { 00:15:06.813 "name": "BaseBdev4", 00:15:06.813 "uuid": "632ac965-ea67-5d2f-a909-5b060f60ff6c", 00:15:06.813 "is_configured": true, 00:15:06.813 "data_offset": 0, 00:15:06.813 "data_size": 65536 00:15:06.813 } 00:15:06.813 ] 00:15:06.813 }' 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:06.813 11:26:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.813 "name": "raid_bdev1", 00:15:06.813 "uuid": "b8f41191-7337-4359-892a-999638638536", 00:15:06.813 "strip_size_kb": 0, 00:15:06.813 "state": "online", 00:15:06.813 "raid_level": "raid1", 00:15:06.813 "superblock": false, 00:15:06.813 "num_base_bdevs": 4, 00:15:06.813 "num_base_bdevs_discovered": 3, 00:15:06.813 "num_base_bdevs_operational": 3, 00:15:06.813 "base_bdevs_list": [ 00:15:06.813 { 00:15:06.813 "name": "spare", 00:15:06.813 "uuid": "15089df3-3b3f-5e88-bf28-7f138b02c5c1", 00:15:06.813 "is_configured": true, 
00:15:06.813 "data_offset": 0, 00:15:06.813 "data_size": 65536 00:15:06.813 }, 00:15:06.813 { 00:15:06.813 "name": null, 00:15:06.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.813 "is_configured": false, 00:15:06.813 "data_offset": 0, 00:15:06.813 "data_size": 65536 00:15:06.813 }, 00:15:06.813 { 00:15:06.813 "name": "BaseBdev3", 00:15:06.813 "uuid": "1f4424ff-d5d0-5a34-a381-0897272db721", 00:15:06.813 "is_configured": true, 00:15:06.813 "data_offset": 0, 00:15:06.813 "data_size": 65536 00:15:06.813 }, 00:15:06.813 { 00:15:06.813 "name": "BaseBdev4", 00:15:06.813 "uuid": "632ac965-ea67-5d2f-a909-5b060f60ff6c", 00:15:06.813 "is_configured": true, 00:15:06.813 "data_offset": 0, 00:15:06.813 "data_size": 65536 00:15:06.813 } 00:15:06.813 ] 00:15:06.813 }' 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.813 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.072 
11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.072 "name": "raid_bdev1", 00:15:07.072 "uuid": "b8f41191-7337-4359-892a-999638638536", 00:15:07.072 "strip_size_kb": 0, 00:15:07.072 "state": "online", 00:15:07.072 "raid_level": "raid1", 00:15:07.072 "superblock": false, 00:15:07.072 "num_base_bdevs": 4, 00:15:07.072 "num_base_bdevs_discovered": 3, 00:15:07.072 "num_base_bdevs_operational": 3, 00:15:07.072 "base_bdevs_list": [ 00:15:07.072 { 00:15:07.072 "name": "spare", 00:15:07.072 "uuid": "15089df3-3b3f-5e88-bf28-7f138b02c5c1", 00:15:07.072 "is_configured": true, 00:15:07.072 "data_offset": 0, 00:15:07.072 "data_size": 65536 00:15:07.072 }, 00:15:07.072 { 00:15:07.072 "name": null, 00:15:07.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.072 "is_configured": false, 00:15:07.072 "data_offset": 0, 00:15:07.072 "data_size": 65536 00:15:07.072 }, 00:15:07.072 { 00:15:07.072 "name": "BaseBdev3", 00:15:07.072 "uuid": "1f4424ff-d5d0-5a34-a381-0897272db721", 00:15:07.072 "is_configured": true, 00:15:07.072 "data_offset": 0, 00:15:07.072 "data_size": 65536 00:15:07.072 }, 00:15:07.072 { 
00:15:07.072 "name": "BaseBdev4", 00:15:07.072 "uuid": "632ac965-ea67-5d2f-a909-5b060f60ff6c", 00:15:07.072 "is_configured": true, 00:15:07.072 "data_offset": 0, 00:15:07.072 "data_size": 65536 00:15:07.072 } 00:15:07.072 ] 00:15:07.072 }' 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.072 11:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.639 [2024-11-15 11:26:50.318577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.639 [2024-11-15 11:26:50.318638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.639 [2024-11-15 11:26:50.318747] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.639 [2024-11-15 11:26:50.318897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.639 [2024-11-15 11:26:50.318914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.639 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:07.897 /dev/nbd0 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:07.897 11:26:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.897 1+0 records in 00:15:07.897 1+0 records out 00:15:07.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285898 s, 14.3 MB/s 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.897 11:26:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:08.156 /dev/nbd1 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:08.156 
11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.156 1+0 records in 00:15:08.156 1+0 records out 00:15:08.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381487 s, 10.7 MB/s 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:08.156 11:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 
/dev/nbd1 00:15:08.415 11:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:08.415 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.415 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:08.415 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:08.415 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:08.415 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.415 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:08.673 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:08.673 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:08.673 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:08.673 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.673 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.673 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:08.673 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:08.673 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.673 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.673 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:08.932 
11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77672 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77672 ']' 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77672 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:08.932 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77672 00:15:08.932 killing process with pid 77672 00:15:08.932 Received shutdown signal, test time was about 60.000000 seconds 00:15:08.932 00:15:08.932 Latency(us) 00:15:08.932 [2024-11-15T11:26:51.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.932 [2024-11-15T11:26:51.882Z] =================================================================================================================== 00:15:08.932 [2024-11-15T11:26:51.882Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:08.933 11:26:51 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:08.933 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:08.933 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77672' 00:15:08.933 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77672 00:15:08.933 11:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77672 00:15:08.933 [2024-11-15 11:26:51.868685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.501 [2024-11-15 11:26:52.272130] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:10.881 00:15:10.881 real 0m21.101s 00:15:10.881 user 0m23.721s 00:15:10.881 sys 0m3.832s 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.881 ************************************ 00:15:10.881 END TEST raid_rebuild_test 00:15:10.881 ************************************ 00:15:10.881 11:26:53 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:10.881 11:26:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:10.881 11:26:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:10.881 11:26:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:10.881 ************************************ 00:15:10.881 START TEST raid_rebuild_test_sb 00:15:10.881 ************************************ 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 
00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:10.881 11:26:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78147 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78147 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78147 ']' 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:10.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:10.881 11:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.881 [2024-11-15 11:26:53.582670] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:15:10.881 [2024-11-15 11:26:53.582887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78147 ] 00:15:10.881 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:10.881 Zero copy mechanism will not be used. 00:15:10.881 [2024-11-15 11:26:53.769285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.141 [2024-11-15 11:26:53.910610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.401 [2024-11-15 11:26:54.127918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.401 [2024-11-15 11:26:54.127965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.660 11:26:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.660 BaseBdev1_malloc 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.660 [2024-11-15 11:26:54.567769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:11.660 [2024-11-15 11:26:54.567874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.660 [2024-11-15 11:26:54.567906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:11.660 [2024-11-15 11:26:54.567925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.660 [2024-11-15 11:26:54.570853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.660 [2024-11-15 11:26:54.570914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:11.660 BaseBdev1 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.660 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.920 BaseBdev2_malloc 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.920 [2024-11-15 11:26:54.625741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:11.920 [2024-11-15 11:26:54.625864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.920 [2024-11-15 11:26:54.625898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:11.920 [2024-11-15 11:26:54.625917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.920 [2024-11-15 11:26:54.629115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.920 [2024-11-15 11:26:54.629223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:11.920 BaseBdev2 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.920 BaseBdev3_malloc 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:11.920 11:26:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.920 [2024-11-15 11:26:54.698787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:11.920 [2024-11-15 11:26:54.698889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.920 [2024-11-15 11:26:54.698922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:11.920 [2024-11-15 11:26:54.698941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.920 [2024-11-15 11:26:54.701892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.920 [2024-11-15 11:26:54.701956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:11.920 BaseBdev3 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.920 BaseBdev4_malloc 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.920 
[2024-11-15 11:26:54.756964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:11.920 [2024-11-15 11:26:54.757060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.920 [2024-11-15 11:26:54.757093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:11.920 [2024-11-15 11:26:54.757112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.920 [2024-11-15 11:26:54.760026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.920 [2024-11-15 11:26:54.760104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:11.920 BaseBdev4 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.920 spare_malloc 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.920 spare_delay 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:11.920 11:26:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.920 [2024-11-15 11:26:54.829889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:11.920 [2024-11-15 11:26:54.829971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.920 [2024-11-15 11:26:54.829997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:11.920 [2024-11-15 11:26:54.830014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.920 [2024-11-15 11:26:54.833217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.920 [2024-11-15 11:26:54.833276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:11.920 spare 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.920 [2024-11-15 11:26:54.841970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.920 [2024-11-15 11:26:54.844668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:11.920 [2024-11-15 11:26:54.844779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:11.920 [2024-11-15 11:26:54.844854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:11.920 [2024-11-15 11:26:54.845133] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:11.920 [2024-11-15 11:26:54.845206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:11.920 [2024-11-15 11:26:54.845553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:11.920 [2024-11-15 11:26:54.845791] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:11.920 [2024-11-15 11:26:54.845815] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:11.920 [2024-11-15 11:26:54.846052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:11.920 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.921 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.921 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.921 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.921 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:11.921 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.921 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.921 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.921 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.921 11:26:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.921 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.921 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.921 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.180 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.180 11:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.180 "name": "raid_bdev1", 00:15:12.180 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:12.180 "strip_size_kb": 0, 00:15:12.180 "state": "online", 00:15:12.180 "raid_level": "raid1", 00:15:12.180 "superblock": true, 00:15:12.180 "num_base_bdevs": 4, 00:15:12.180 "num_base_bdevs_discovered": 4, 00:15:12.180 "num_base_bdevs_operational": 4, 00:15:12.180 "base_bdevs_list": [ 00:15:12.180 { 00:15:12.180 "name": "BaseBdev1", 00:15:12.180 "uuid": "3c490513-5ea6-577b-91de-6d17307cf58c", 00:15:12.180 "is_configured": true, 00:15:12.180 "data_offset": 2048, 00:15:12.180 "data_size": 63488 00:15:12.180 }, 00:15:12.180 { 00:15:12.180 "name": "BaseBdev2", 00:15:12.180 "uuid": "31b058ec-591c-5ad4-8287-5c976b08a379", 00:15:12.180 "is_configured": true, 00:15:12.180 "data_offset": 2048, 00:15:12.180 "data_size": 63488 00:15:12.180 }, 00:15:12.180 { 00:15:12.180 "name": "BaseBdev3", 00:15:12.180 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:12.180 "is_configured": true, 00:15:12.180 "data_offset": 2048, 00:15:12.180 "data_size": 63488 00:15:12.180 }, 00:15:12.180 { 00:15:12.180 "name": "BaseBdev4", 00:15:12.180 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:12.180 "is_configured": true, 00:15:12.180 "data_offset": 2048, 00:15:12.180 "data_size": 63488 00:15:12.180 } 00:15:12.180 ] 00:15:12.180 }' 00:15:12.180 11:26:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.180 11:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.440 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:12.440 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:12.440 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.440 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.440 [2024-11-15 11:26:55.382690] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.699 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:12.958 [2024-11-15 11:26:55.770414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:12.958 /dev/nbd0 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:12.958 
11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.958 1+0 records in 00:15:12.958 1+0 records out 00:15:12.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804767 s, 5.1 MB/s 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:12.958 11:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:21.076 63488+0 records in 00:15:21.076 63488+0 records out 00:15:21.076 32505856 bytes (33 MB, 31 MiB) copied, 8.00424 s, 4.1 MB/s 00:15:21.076 11:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:21.076 11:27:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.076 11:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:21.076 11:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.076 11:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:21.076 11:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.076 11:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:21.335 [2024-11-15 11:27:04.140863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.335 [2024-11-15 11:27:04.165008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.335 
11:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:21.335 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.336 "name": "raid_bdev1", 00:15:21.336 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:21.336 "strip_size_kb": 0, 00:15:21.336 "state": 
"online", 00:15:21.336 "raid_level": "raid1", 00:15:21.336 "superblock": true, 00:15:21.336 "num_base_bdevs": 4, 00:15:21.336 "num_base_bdevs_discovered": 3, 00:15:21.336 "num_base_bdevs_operational": 3, 00:15:21.336 "base_bdevs_list": [ 00:15:21.336 { 00:15:21.336 "name": null, 00:15:21.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.336 "is_configured": false, 00:15:21.336 "data_offset": 0, 00:15:21.336 "data_size": 63488 00:15:21.336 }, 00:15:21.336 { 00:15:21.336 "name": "BaseBdev2", 00:15:21.336 "uuid": "31b058ec-591c-5ad4-8287-5c976b08a379", 00:15:21.336 "is_configured": true, 00:15:21.336 "data_offset": 2048, 00:15:21.336 "data_size": 63488 00:15:21.336 }, 00:15:21.336 { 00:15:21.336 "name": "BaseBdev3", 00:15:21.336 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:21.336 "is_configured": true, 00:15:21.336 "data_offset": 2048, 00:15:21.336 "data_size": 63488 00:15:21.336 }, 00:15:21.336 { 00:15:21.336 "name": "BaseBdev4", 00:15:21.336 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:21.336 "is_configured": true, 00:15:21.336 "data_offset": 2048, 00:15:21.336 "data_size": 63488 00:15:21.336 } 00:15:21.336 ] 00:15:21.336 }' 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.336 11:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.906 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:21.906 11:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.906 11:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.906 [2024-11-15 11:27:04.665136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.906 [2024-11-15 11:27:04.679218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:21.906 11:27:04 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.906 11:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:21.906 [2024-11-15 11:27:04.681806] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.844 "name": "raid_bdev1", 00:15:22.844 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:22.844 "strip_size_kb": 0, 00:15:22.844 "state": "online", 00:15:22.844 "raid_level": "raid1", 00:15:22.844 "superblock": true, 00:15:22.844 "num_base_bdevs": 4, 00:15:22.844 "num_base_bdevs_discovered": 4, 00:15:22.844 "num_base_bdevs_operational": 4, 00:15:22.844 "process": { 00:15:22.844 "type": "rebuild", 00:15:22.844 "target": "spare", 00:15:22.844 "progress": { 00:15:22.844 "blocks": 20480, 
00:15:22.844 "percent": 32 00:15:22.844 } 00:15:22.844 }, 00:15:22.844 "base_bdevs_list": [ 00:15:22.844 { 00:15:22.844 "name": "spare", 00:15:22.844 "uuid": "b2dd1ec6-d1b5-51bb-bd30-df85637bf5a8", 00:15:22.844 "is_configured": true, 00:15:22.844 "data_offset": 2048, 00:15:22.844 "data_size": 63488 00:15:22.844 }, 00:15:22.844 { 00:15:22.844 "name": "BaseBdev2", 00:15:22.844 "uuid": "31b058ec-591c-5ad4-8287-5c976b08a379", 00:15:22.844 "is_configured": true, 00:15:22.844 "data_offset": 2048, 00:15:22.844 "data_size": 63488 00:15:22.844 }, 00:15:22.844 { 00:15:22.844 "name": "BaseBdev3", 00:15:22.844 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:22.844 "is_configured": true, 00:15:22.844 "data_offset": 2048, 00:15:22.844 "data_size": 63488 00:15:22.844 }, 00:15:22.844 { 00:15:22.844 "name": "BaseBdev4", 00:15:22.844 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:22.844 "is_configured": true, 00:15:22.844 "data_offset": 2048, 00:15:22.844 "data_size": 63488 00:15:22.844 } 00:15:22.844 ] 00:15:22.844 }' 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.844 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.103 [2024-11-15 11:27:05.839058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.103 [2024-11-15 11:27:05.892740] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:23.103 [2024-11-15 11:27:05.892831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.103 [2024-11-15 11:27:05.892855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.103 [2024-11-15 11:27:05.892870] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.103 "name": "raid_bdev1", 00:15:23.103 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:23.103 "strip_size_kb": 0, 00:15:23.103 "state": "online", 00:15:23.103 "raid_level": "raid1", 00:15:23.103 "superblock": true, 00:15:23.103 "num_base_bdevs": 4, 00:15:23.103 "num_base_bdevs_discovered": 3, 00:15:23.103 "num_base_bdevs_operational": 3, 00:15:23.103 "base_bdevs_list": [ 00:15:23.103 { 00:15:23.103 "name": null, 00:15:23.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.103 "is_configured": false, 00:15:23.103 "data_offset": 0, 00:15:23.103 "data_size": 63488 00:15:23.103 }, 00:15:23.103 { 00:15:23.103 "name": "BaseBdev2", 00:15:23.103 "uuid": "31b058ec-591c-5ad4-8287-5c976b08a379", 00:15:23.103 "is_configured": true, 00:15:23.103 "data_offset": 2048, 00:15:23.103 "data_size": 63488 00:15:23.103 }, 00:15:23.103 { 00:15:23.103 "name": "BaseBdev3", 00:15:23.103 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:23.103 "is_configured": true, 00:15:23.103 "data_offset": 2048, 00:15:23.103 "data_size": 63488 00:15:23.103 }, 00:15:23.103 { 00:15:23.103 "name": "BaseBdev4", 00:15:23.103 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:23.103 "is_configured": true, 00:15:23.103 "data_offset": 2048, 00:15:23.103 "data_size": 63488 00:15:23.103 } 00:15:23.103 ] 00:15:23.103 }' 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.103 11:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.678 
11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.678 "name": "raid_bdev1", 00:15:23.678 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:23.678 "strip_size_kb": 0, 00:15:23.678 "state": "online", 00:15:23.678 "raid_level": "raid1", 00:15:23.678 "superblock": true, 00:15:23.678 "num_base_bdevs": 4, 00:15:23.678 "num_base_bdevs_discovered": 3, 00:15:23.678 "num_base_bdevs_operational": 3, 00:15:23.678 "base_bdevs_list": [ 00:15:23.678 { 00:15:23.678 "name": null, 00:15:23.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.678 "is_configured": false, 00:15:23.678 "data_offset": 0, 00:15:23.678 "data_size": 63488 00:15:23.678 }, 00:15:23.678 { 00:15:23.678 "name": "BaseBdev2", 00:15:23.678 "uuid": "31b058ec-591c-5ad4-8287-5c976b08a379", 00:15:23.678 "is_configured": true, 00:15:23.678 "data_offset": 2048, 00:15:23.678 "data_size": 63488 00:15:23.678 }, 00:15:23.678 { 00:15:23.678 "name": "BaseBdev3", 00:15:23.678 "uuid": 
"5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:23.678 "is_configured": true, 00:15:23.678 "data_offset": 2048, 00:15:23.678 "data_size": 63488 00:15:23.678 }, 00:15:23.678 { 00:15:23.678 "name": "BaseBdev4", 00:15:23.678 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:23.678 "is_configured": true, 00:15:23.678 "data_offset": 2048, 00:15:23.678 "data_size": 63488 00:15:23.678 } 00:15:23.678 ] 00:15:23.678 }' 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.678 [2024-11-15 11:27:06.590867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.678 [2024-11-15 11:27:06.603885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.678 11:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:23.678 [2024-11-15 11:27:06.606856] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.056 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.056 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:25.056 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.056 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.056 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.056 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.056 11:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.056 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.057 "name": "raid_bdev1", 00:15:25.057 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:25.057 "strip_size_kb": 0, 00:15:25.057 "state": "online", 00:15:25.057 "raid_level": "raid1", 00:15:25.057 "superblock": true, 00:15:25.057 "num_base_bdevs": 4, 00:15:25.057 "num_base_bdevs_discovered": 4, 00:15:25.057 "num_base_bdevs_operational": 4, 00:15:25.057 "process": { 00:15:25.057 "type": "rebuild", 00:15:25.057 "target": "spare", 00:15:25.057 "progress": { 00:15:25.057 "blocks": 20480, 00:15:25.057 "percent": 32 00:15:25.057 } 00:15:25.057 }, 00:15:25.057 "base_bdevs_list": [ 00:15:25.057 { 00:15:25.057 "name": "spare", 00:15:25.057 "uuid": "b2dd1ec6-d1b5-51bb-bd30-df85637bf5a8", 00:15:25.057 "is_configured": true, 00:15:25.057 "data_offset": 2048, 00:15:25.057 "data_size": 63488 00:15:25.057 }, 00:15:25.057 { 00:15:25.057 "name": "BaseBdev2", 00:15:25.057 "uuid": "31b058ec-591c-5ad4-8287-5c976b08a379", 00:15:25.057 "is_configured": true, 00:15:25.057 "data_offset": 2048, 
00:15:25.057 "data_size": 63488 00:15:25.057 }, 00:15:25.057 { 00:15:25.057 "name": "BaseBdev3", 00:15:25.057 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:25.057 "is_configured": true, 00:15:25.057 "data_offset": 2048, 00:15:25.057 "data_size": 63488 00:15:25.057 }, 00:15:25.057 { 00:15:25.057 "name": "BaseBdev4", 00:15:25.057 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:25.057 "is_configured": true, 00:15:25.057 "data_offset": 2048, 00:15:25.057 "data_size": 63488 00:15:25.057 } 00:15:25.057 ] 00:15:25.057 }' 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:25.057 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.057 [2024-11-15 11:27:07.780400] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.057 [2024-11-15 11:27:07.917146] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.057 "name": "raid_bdev1", 00:15:25.057 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:25.057 "strip_size_kb": 0, 00:15:25.057 "state": "online", 00:15:25.057 "raid_level": "raid1", 00:15:25.057 "superblock": true, 00:15:25.057 "num_base_bdevs": 4, 
00:15:25.057 "num_base_bdevs_discovered": 3, 00:15:25.057 "num_base_bdevs_operational": 3, 00:15:25.057 "process": { 00:15:25.057 "type": "rebuild", 00:15:25.057 "target": "spare", 00:15:25.057 "progress": { 00:15:25.057 "blocks": 24576, 00:15:25.057 "percent": 38 00:15:25.057 } 00:15:25.057 }, 00:15:25.057 "base_bdevs_list": [ 00:15:25.057 { 00:15:25.057 "name": "spare", 00:15:25.057 "uuid": "b2dd1ec6-d1b5-51bb-bd30-df85637bf5a8", 00:15:25.057 "is_configured": true, 00:15:25.057 "data_offset": 2048, 00:15:25.057 "data_size": 63488 00:15:25.057 }, 00:15:25.057 { 00:15:25.057 "name": null, 00:15:25.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.057 "is_configured": false, 00:15:25.057 "data_offset": 0, 00:15:25.057 "data_size": 63488 00:15:25.057 }, 00:15:25.057 { 00:15:25.057 "name": "BaseBdev3", 00:15:25.057 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:25.057 "is_configured": true, 00:15:25.057 "data_offset": 2048, 00:15:25.057 "data_size": 63488 00:15:25.057 }, 00:15:25.057 { 00:15:25.057 "name": "BaseBdev4", 00:15:25.057 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:25.057 "is_configured": true, 00:15:25.057 "data_offset": 2048, 00:15:25.057 "data_size": 63488 00:15:25.057 } 00:15:25.057 ] 00:15:25.057 }' 00:15:25.057 11:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=505 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.316 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.316 "name": "raid_bdev1", 00:15:25.316 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:25.316 "strip_size_kb": 0, 00:15:25.316 "state": "online", 00:15:25.316 "raid_level": "raid1", 00:15:25.316 "superblock": true, 00:15:25.316 "num_base_bdevs": 4, 00:15:25.317 "num_base_bdevs_discovered": 3, 00:15:25.317 "num_base_bdevs_operational": 3, 00:15:25.317 "process": { 00:15:25.317 "type": "rebuild", 00:15:25.317 "target": "spare", 00:15:25.317 "progress": { 00:15:25.317 "blocks": 26624, 00:15:25.317 "percent": 41 00:15:25.317 } 00:15:25.317 }, 00:15:25.317 "base_bdevs_list": [ 00:15:25.317 { 00:15:25.317 "name": "spare", 00:15:25.317 "uuid": "b2dd1ec6-d1b5-51bb-bd30-df85637bf5a8", 00:15:25.317 "is_configured": true, 00:15:25.317 "data_offset": 2048, 00:15:25.317 "data_size": 63488 00:15:25.317 }, 00:15:25.317 { 
00:15:25.317 "name": null, 00:15:25.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.317 "is_configured": false, 00:15:25.317 "data_offset": 0, 00:15:25.317 "data_size": 63488 00:15:25.317 }, 00:15:25.317 { 00:15:25.317 "name": "BaseBdev3", 00:15:25.317 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:25.317 "is_configured": true, 00:15:25.317 "data_offset": 2048, 00:15:25.317 "data_size": 63488 00:15:25.317 }, 00:15:25.317 { 00:15:25.317 "name": "BaseBdev4", 00:15:25.317 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:25.317 "is_configured": true, 00:15:25.317 "data_offset": 2048, 00:15:25.317 "data_size": 63488 00:15:25.317 } 00:15:25.317 ] 00:15:25.317 }' 00:15:25.317 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.317 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.317 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.317 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.317 11:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.695 "name": "raid_bdev1", 00:15:26.695 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:26.695 "strip_size_kb": 0, 00:15:26.695 "state": "online", 00:15:26.695 "raid_level": "raid1", 00:15:26.695 "superblock": true, 00:15:26.695 "num_base_bdevs": 4, 00:15:26.695 "num_base_bdevs_discovered": 3, 00:15:26.695 "num_base_bdevs_operational": 3, 00:15:26.695 "process": { 00:15:26.695 "type": "rebuild", 00:15:26.695 "target": "spare", 00:15:26.695 "progress": { 00:15:26.695 "blocks": 51200, 00:15:26.695 "percent": 80 00:15:26.695 } 00:15:26.695 }, 00:15:26.695 "base_bdevs_list": [ 00:15:26.695 { 00:15:26.695 "name": "spare", 00:15:26.695 "uuid": "b2dd1ec6-d1b5-51bb-bd30-df85637bf5a8", 00:15:26.695 "is_configured": true, 00:15:26.695 "data_offset": 2048, 00:15:26.695 "data_size": 63488 00:15:26.695 }, 00:15:26.695 { 00:15:26.695 "name": null, 00:15:26.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.695 "is_configured": false, 00:15:26.695 "data_offset": 0, 00:15:26.695 "data_size": 63488 00:15:26.695 }, 00:15:26.695 { 00:15:26.695 "name": "BaseBdev3", 00:15:26.695 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:26.695 "is_configured": true, 00:15:26.695 "data_offset": 2048, 00:15:26.695 "data_size": 63488 00:15:26.695 }, 00:15:26.695 { 00:15:26.695 "name": "BaseBdev4", 00:15:26.695 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:26.695 "is_configured": true, 00:15:26.695 "data_offset": 
2048, 00:15:26.695 "data_size": 63488 00:15:26.695 } 00:15:26.695 ] 00:15:26.695 }' 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.695 11:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.954 [2024-11-15 11:27:09.833870] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:26.954 [2024-11-15 11:27:09.833991] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:26.954 [2024-11-15 11:27:09.834268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.522 "name": "raid_bdev1", 00:15:27.522 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:27.522 "strip_size_kb": 0, 00:15:27.522 "state": "online", 00:15:27.522 "raid_level": "raid1", 00:15:27.522 "superblock": true, 00:15:27.522 "num_base_bdevs": 4, 00:15:27.522 "num_base_bdevs_discovered": 3, 00:15:27.522 "num_base_bdevs_operational": 3, 00:15:27.522 "base_bdevs_list": [ 00:15:27.522 { 00:15:27.522 "name": "spare", 00:15:27.522 "uuid": "b2dd1ec6-d1b5-51bb-bd30-df85637bf5a8", 00:15:27.522 "is_configured": true, 00:15:27.522 "data_offset": 2048, 00:15:27.522 "data_size": 63488 00:15:27.522 }, 00:15:27.522 { 00:15:27.522 "name": null, 00:15:27.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.522 "is_configured": false, 00:15:27.522 "data_offset": 0, 00:15:27.522 "data_size": 63488 00:15:27.522 }, 00:15:27.522 { 00:15:27.522 "name": "BaseBdev3", 00:15:27.522 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:27.522 "is_configured": true, 00:15:27.522 "data_offset": 2048, 00:15:27.522 "data_size": 63488 00:15:27.522 }, 00:15:27.522 { 00:15:27.522 "name": "BaseBdev4", 00:15:27.522 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:27.522 "is_configured": true, 00:15:27.522 "data_offset": 2048, 00:15:27.522 "data_size": 63488 00:15:27.522 } 00:15:27.522 ] 00:15:27.522 }' 00:15:27.522 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.781 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.781 "name": "raid_bdev1", 00:15:27.781 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:27.781 "strip_size_kb": 0, 00:15:27.781 "state": "online", 00:15:27.782 "raid_level": "raid1", 00:15:27.782 "superblock": true, 00:15:27.782 "num_base_bdevs": 4, 00:15:27.782 "num_base_bdevs_discovered": 3, 00:15:27.782 "num_base_bdevs_operational": 3, 00:15:27.782 "base_bdevs_list": [ 00:15:27.782 { 00:15:27.782 "name": "spare", 00:15:27.782 "uuid": "b2dd1ec6-d1b5-51bb-bd30-df85637bf5a8", 00:15:27.782 "is_configured": true, 00:15:27.782 "data_offset": 2048, 00:15:27.782 "data_size": 63488 
00:15:27.782 }, 00:15:27.782 { 00:15:27.782 "name": null, 00:15:27.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.782 "is_configured": false, 00:15:27.782 "data_offset": 0, 00:15:27.782 "data_size": 63488 00:15:27.782 }, 00:15:27.782 { 00:15:27.782 "name": "BaseBdev3", 00:15:27.782 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:27.782 "is_configured": true, 00:15:27.782 "data_offset": 2048, 00:15:27.782 "data_size": 63488 00:15:27.782 }, 00:15:27.782 { 00:15:27.782 "name": "BaseBdev4", 00:15:27.782 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:27.782 "is_configured": true, 00:15:27.782 "data_offset": 2048, 00:15:27.782 "data_size": 63488 00:15:27.782 } 00:15:27.782 ] 00:15:27.782 }' 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.782 11:27:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.782 11:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.041 11:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.041 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.041 "name": "raid_bdev1", 00:15:28.041 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:28.041 "strip_size_kb": 0, 00:15:28.041 "state": "online", 00:15:28.041 "raid_level": "raid1", 00:15:28.041 "superblock": true, 00:15:28.041 "num_base_bdevs": 4, 00:15:28.041 "num_base_bdevs_discovered": 3, 00:15:28.041 "num_base_bdevs_operational": 3, 00:15:28.041 "base_bdevs_list": [ 00:15:28.041 { 00:15:28.041 "name": "spare", 00:15:28.041 "uuid": "b2dd1ec6-d1b5-51bb-bd30-df85637bf5a8", 00:15:28.041 "is_configured": true, 00:15:28.041 "data_offset": 2048, 00:15:28.041 "data_size": 63488 00:15:28.041 }, 00:15:28.041 { 00:15:28.041 "name": null, 00:15:28.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.041 "is_configured": false, 00:15:28.041 "data_offset": 0, 00:15:28.041 "data_size": 63488 00:15:28.041 }, 00:15:28.041 { 00:15:28.041 "name": "BaseBdev3", 00:15:28.041 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:28.041 "is_configured": true, 00:15:28.041 "data_offset": 2048, 00:15:28.041 "data_size": 63488 00:15:28.041 }, 
00:15:28.041 { 00:15:28.041 "name": "BaseBdev4", 00:15:28.041 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:28.041 "is_configured": true, 00:15:28.041 "data_offset": 2048, 00:15:28.041 "data_size": 63488 00:15:28.041 } 00:15:28.041 ] 00:15:28.041 }' 00:15:28.041 11:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.041 11:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.309 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:28.309 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.309 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.309 [2024-11-15 11:27:11.246684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:28.309 [2024-11-15 11:27:11.246901] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.309 [2024-11-15 11:27:11.247047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.309 [2024-11-15 11:27:11.247158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.309 [2024-11-15 11:27:11.247175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:28.309 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.309 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.309 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:28.309 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.309 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.567 11:27:11 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:28.567 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:28.826 /dev/nbd0 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 
00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.826 1+0 records in 00:15:28.826 1+0 records out 00:15:28.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423623 s, 9.7 MB/s 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:28.826 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:29.085 /dev/nbd1 00:15:29.085 11:27:11 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:29.085 11:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:29.085 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:29.085 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:29.085 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:29.085 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:29.085 11:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:29.085 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:29.085 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:29.085 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:29.085 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.085 1+0 records in 00:15:29.085 1+0 records out 00:15:29.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456363 s, 9.0 MB/s 00:15:29.085 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.085 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:29.085 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.085 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:29.085 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:29.085 11:27:12 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.085 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:29.085 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:29.344 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:29.344 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.344 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:29.344 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:29.344 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:29.344 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.344 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:29.911 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:29.911 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:29.911 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:29.911 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:29.911 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:29.911 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:29.911 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:29.911 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:29.911 11:27:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.911 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:30.170 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.171 [2024-11-15 11:27:12.893206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:15:30.171 [2024-11-15 11:27:12.893296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.171 [2024-11-15 11:27:12.893328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:30.171 [2024-11-15 11:27:12.893344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.171 [2024-11-15 11:27:12.896354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.171 [2024-11-15 11:27:12.896398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:30.171 [2024-11-15 11:27:12.896509] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:30.171 [2024-11-15 11:27:12.896600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:30.171 [2024-11-15 11:27:12.896780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.171 [2024-11-15 11:27:12.896942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:30.171 spare 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.171 11:27:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.171 [2024-11-15 11:27:12.997047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:30.171 [2024-11-15 11:27:12.997073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:30.171 [2024-11-15 11:27:12.997419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:30.171 [2024-11-15 11:27:12.997674] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:30.171 [2024-11-15 11:27:12.997695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:30.171 [2024-11-15 11:27:12.997876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.171 "name": "raid_bdev1", 00:15:30.171 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:30.171 "strip_size_kb": 0, 00:15:30.171 "state": "online", 00:15:30.171 "raid_level": "raid1", 00:15:30.171 "superblock": true, 00:15:30.171 "num_base_bdevs": 4, 00:15:30.171 "num_base_bdevs_discovered": 3, 00:15:30.171 "num_base_bdevs_operational": 3, 00:15:30.171 "base_bdevs_list": [ 00:15:30.171 { 00:15:30.171 "name": "spare", 00:15:30.171 "uuid": "b2dd1ec6-d1b5-51bb-bd30-df85637bf5a8", 00:15:30.171 "is_configured": true, 00:15:30.171 "data_offset": 2048, 00:15:30.171 "data_size": 63488 00:15:30.171 }, 00:15:30.171 { 00:15:30.171 "name": null, 00:15:30.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.171 "is_configured": false, 00:15:30.171 "data_offset": 2048, 00:15:30.171 "data_size": 63488 00:15:30.171 }, 00:15:30.171 { 00:15:30.171 "name": "BaseBdev3", 00:15:30.171 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:30.171 "is_configured": true, 00:15:30.171 "data_offset": 2048, 00:15:30.171 "data_size": 63488 00:15:30.171 }, 00:15:30.171 { 00:15:30.171 "name": "BaseBdev4", 00:15:30.171 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:30.171 "is_configured": true, 00:15:30.171 "data_offset": 2048, 00:15:30.171 "data_size": 63488 00:15:30.171 } 00:15:30.171 ] 00:15:30.171 }' 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.171 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.790 11:27:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.790 "name": "raid_bdev1", 00:15:30.790 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:30.790 "strip_size_kb": 0, 00:15:30.790 "state": "online", 00:15:30.790 "raid_level": "raid1", 00:15:30.790 "superblock": true, 00:15:30.790 "num_base_bdevs": 4, 00:15:30.790 "num_base_bdevs_discovered": 3, 00:15:30.790 "num_base_bdevs_operational": 3, 00:15:30.790 "base_bdevs_list": [ 00:15:30.790 { 00:15:30.790 "name": "spare", 00:15:30.790 "uuid": "b2dd1ec6-d1b5-51bb-bd30-df85637bf5a8", 00:15:30.790 "is_configured": true, 00:15:30.790 "data_offset": 2048, 00:15:30.790 "data_size": 63488 00:15:30.790 }, 00:15:30.790 { 00:15:30.790 "name": null, 00:15:30.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.790 "is_configured": false, 00:15:30.790 "data_offset": 2048, 00:15:30.790 "data_size": 63488 00:15:30.790 }, 00:15:30.790 { 00:15:30.790 "name": "BaseBdev3", 00:15:30.790 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:30.790 "is_configured": true, 00:15:30.790 "data_offset": 2048, 00:15:30.790 "data_size": 63488 00:15:30.790 
}, 00:15:30.790 { 00:15:30.790 "name": "BaseBdev4", 00:15:30.790 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:30.790 "is_configured": true, 00:15:30.790 "data_offset": 2048, 00:15:30.790 "data_size": 63488 00:15:30.790 } 00:15:30.790 ] 00:15:30.790 }' 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.790 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.790 [2024-11-15 11:27:13.722116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.050 "name": "raid_bdev1", 00:15:31.050 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:31.050 "strip_size_kb": 0, 00:15:31.050 "state": "online", 00:15:31.050 "raid_level": "raid1", 00:15:31.050 "superblock": true, 00:15:31.050 "num_base_bdevs": 4, 00:15:31.050 "num_base_bdevs_discovered": 2, 00:15:31.050 "num_base_bdevs_operational": 
2, 00:15:31.050 "base_bdevs_list": [ 00:15:31.050 { 00:15:31.050 "name": null, 00:15:31.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.050 "is_configured": false, 00:15:31.050 "data_offset": 0, 00:15:31.050 "data_size": 63488 00:15:31.050 }, 00:15:31.050 { 00:15:31.050 "name": null, 00:15:31.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.050 "is_configured": false, 00:15:31.050 "data_offset": 2048, 00:15:31.050 "data_size": 63488 00:15:31.050 }, 00:15:31.050 { 00:15:31.050 "name": "BaseBdev3", 00:15:31.050 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:31.050 "is_configured": true, 00:15:31.050 "data_offset": 2048, 00:15:31.050 "data_size": 63488 00:15:31.050 }, 00:15:31.050 { 00:15:31.050 "name": "BaseBdev4", 00:15:31.050 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:31.050 "is_configured": true, 00:15:31.050 "data_offset": 2048, 00:15:31.050 "data_size": 63488 00:15:31.050 } 00:15:31.050 ] 00:15:31.050 }' 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.050 11:27:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.309 11:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:31.309 11:27:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.309 11:27:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.309 [2024-11-15 11:27:14.238281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.309 [2024-11-15 11:27:14.238574] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:31.309 [2024-11-15 11:27:14.238610] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:31.309 [2024-11-15 11:27:14.238696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.309 [2024-11-15 11:27:14.253287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:15:31.309 11:27:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.309 11:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:31.309 [2024-11-15 11:27:14.256276] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.686 "name": "raid_bdev1", 00:15:32.686 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:32.686 "strip_size_kb": 0, 00:15:32.686 "state": "online", 00:15:32.686 "raid_level": "raid1", 
00:15:32.686 "superblock": true, 00:15:32.686 "num_base_bdevs": 4, 00:15:32.686 "num_base_bdevs_discovered": 3, 00:15:32.686 "num_base_bdevs_operational": 3, 00:15:32.686 "process": { 00:15:32.686 "type": "rebuild", 00:15:32.686 "target": "spare", 00:15:32.686 "progress": { 00:15:32.686 "blocks": 20480, 00:15:32.686 "percent": 32 00:15:32.686 } 00:15:32.686 }, 00:15:32.686 "base_bdevs_list": [ 00:15:32.686 { 00:15:32.686 "name": "spare", 00:15:32.686 "uuid": "b2dd1ec6-d1b5-51bb-bd30-df85637bf5a8", 00:15:32.686 "is_configured": true, 00:15:32.686 "data_offset": 2048, 00:15:32.686 "data_size": 63488 00:15:32.686 }, 00:15:32.686 { 00:15:32.686 "name": null, 00:15:32.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.686 "is_configured": false, 00:15:32.686 "data_offset": 2048, 00:15:32.686 "data_size": 63488 00:15:32.686 }, 00:15:32.686 { 00:15:32.686 "name": "BaseBdev3", 00:15:32.686 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:32.686 "is_configured": true, 00:15:32.686 "data_offset": 2048, 00:15:32.686 "data_size": 63488 00:15:32.686 }, 00:15:32.686 { 00:15:32.686 "name": "BaseBdev4", 00:15:32.686 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:32.686 "is_configured": true, 00:15:32.686 "data_offset": 2048, 00:15:32.686 "data_size": 63488 00:15:32.686 } 00:15:32.686 ] 00:15:32.686 }' 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:32.686 11:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.686 [2024-11-15 11:27:15.426857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.686 [2024-11-15 11:27:15.466700] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:32.687 [2024-11-15 11:27:15.466795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.687 [2024-11-15 11:27:15.466824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.687 [2024-11-15 11:27:15.466845] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.687 "name": "raid_bdev1", 00:15:32.687 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:32.687 "strip_size_kb": 0, 00:15:32.687 "state": "online", 00:15:32.687 "raid_level": "raid1", 00:15:32.687 "superblock": true, 00:15:32.687 "num_base_bdevs": 4, 00:15:32.687 "num_base_bdevs_discovered": 2, 00:15:32.687 "num_base_bdevs_operational": 2, 00:15:32.687 "base_bdevs_list": [ 00:15:32.687 { 00:15:32.687 "name": null, 00:15:32.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.687 "is_configured": false, 00:15:32.687 "data_offset": 0, 00:15:32.687 "data_size": 63488 00:15:32.687 }, 00:15:32.687 { 00:15:32.687 "name": null, 00:15:32.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.687 "is_configured": false, 00:15:32.687 "data_offset": 2048, 00:15:32.687 "data_size": 63488 00:15:32.687 }, 00:15:32.687 { 00:15:32.687 "name": "BaseBdev3", 00:15:32.687 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:32.687 "is_configured": true, 00:15:32.687 "data_offset": 2048, 00:15:32.687 "data_size": 63488 00:15:32.687 }, 00:15:32.687 { 00:15:32.687 "name": "BaseBdev4", 00:15:32.687 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:32.687 "is_configured": true, 00:15:32.687 "data_offset": 2048, 00:15:32.687 "data_size": 63488 00:15:32.687 } 00:15:32.687 ] 00:15:32.687 }' 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:32.687 11:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.256 11:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:33.256 11:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.256 11:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.256 [2024-11-15 11:27:16.013055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:33.256 [2024-11-15 11:27:16.013135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.256 [2024-11-15 11:27:16.013192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:33.256 [2024-11-15 11:27:16.013237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.256 [2024-11-15 11:27:16.013907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.256 [2024-11-15 11:27:16.013937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:33.256 [2024-11-15 11:27:16.014109] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:33.256 [2024-11-15 11:27:16.014129] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:33.256 [2024-11-15 11:27:16.014154] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:33.256 [2024-11-15 11:27:16.014198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.256 [2024-11-15 11:27:16.028035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:15:33.256 spare 00:15:33.256 11:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.256 11:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:33.256 [2024-11-15 11:27:16.030893] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:34.194 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.194 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.194 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.194 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.194 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.194 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.194 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.194 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.194 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.194 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.194 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.194 "name": "raid_bdev1", 00:15:34.194 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:34.194 "strip_size_kb": 0, 00:15:34.194 "state": "online", 00:15:34.194 
"raid_level": "raid1", 00:15:34.194 "superblock": true, 00:15:34.194 "num_base_bdevs": 4, 00:15:34.194 "num_base_bdevs_discovered": 3, 00:15:34.194 "num_base_bdevs_operational": 3, 00:15:34.194 "process": { 00:15:34.194 "type": "rebuild", 00:15:34.194 "target": "spare", 00:15:34.194 "progress": { 00:15:34.194 "blocks": 20480, 00:15:34.194 "percent": 32 00:15:34.194 } 00:15:34.194 }, 00:15:34.194 "base_bdevs_list": [ 00:15:34.194 { 00:15:34.194 "name": "spare", 00:15:34.194 "uuid": "b2dd1ec6-d1b5-51bb-bd30-df85637bf5a8", 00:15:34.194 "is_configured": true, 00:15:34.194 "data_offset": 2048, 00:15:34.194 "data_size": 63488 00:15:34.194 }, 00:15:34.194 { 00:15:34.194 "name": null, 00:15:34.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.194 "is_configured": false, 00:15:34.194 "data_offset": 2048, 00:15:34.194 "data_size": 63488 00:15:34.194 }, 00:15:34.194 { 00:15:34.194 "name": "BaseBdev3", 00:15:34.194 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:34.194 "is_configured": true, 00:15:34.194 "data_offset": 2048, 00:15:34.194 "data_size": 63488 00:15:34.194 }, 00:15:34.194 { 00:15:34.194 "name": "BaseBdev4", 00:15:34.194 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:34.194 "is_configured": true, 00:15:34.194 "data_offset": 2048, 00:15:34.194 "data_size": 63488 00:15:34.194 } 00:15:34.194 ] 00:15:34.194 }' 00:15:34.194 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.455 [2024-11-15 11:27:17.201645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.455 [2024-11-15 11:27:17.241533] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:34.455 [2024-11-15 11:27:17.241703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.455 [2024-11-15 11:27:17.241730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.455 [2024-11-15 11:27:17.241760] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.455 
11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.455 "name": "raid_bdev1", 00:15:34.455 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:34.455 "strip_size_kb": 0, 00:15:34.455 "state": "online", 00:15:34.455 "raid_level": "raid1", 00:15:34.455 "superblock": true, 00:15:34.455 "num_base_bdevs": 4, 00:15:34.455 "num_base_bdevs_discovered": 2, 00:15:34.455 "num_base_bdevs_operational": 2, 00:15:34.455 "base_bdevs_list": [ 00:15:34.455 { 00:15:34.455 "name": null, 00:15:34.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.455 "is_configured": false, 00:15:34.455 "data_offset": 0, 00:15:34.455 "data_size": 63488 00:15:34.455 }, 00:15:34.455 { 00:15:34.455 "name": null, 00:15:34.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.455 "is_configured": false, 00:15:34.455 "data_offset": 2048, 00:15:34.455 "data_size": 63488 00:15:34.455 }, 00:15:34.455 { 00:15:34.455 "name": "BaseBdev3", 00:15:34.455 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:34.455 "is_configured": true, 00:15:34.455 "data_offset": 2048, 00:15:34.455 "data_size": 63488 00:15:34.455 }, 00:15:34.455 { 00:15:34.455 "name": "BaseBdev4", 00:15:34.455 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:34.455 "is_configured": true, 00:15:34.455 "data_offset": 2048, 00:15:34.455 "data_size": 63488 00:15:34.455 } 00:15:34.455 ] 00:15:34.455 }' 00:15:34.455 11:27:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.455 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.023 "name": "raid_bdev1", 00:15:35.023 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:35.023 "strip_size_kb": 0, 00:15:35.023 "state": "online", 00:15:35.023 "raid_level": "raid1", 00:15:35.023 "superblock": true, 00:15:35.023 "num_base_bdevs": 4, 00:15:35.023 "num_base_bdevs_discovered": 2, 00:15:35.023 "num_base_bdevs_operational": 2, 00:15:35.023 "base_bdevs_list": [ 00:15:35.023 { 00:15:35.023 "name": null, 00:15:35.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.023 "is_configured": false, 00:15:35.023 "data_offset": 0, 00:15:35.023 "data_size": 63488 00:15:35.023 }, 00:15:35.023 
{ 00:15:35.023 "name": null, 00:15:35.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.023 "is_configured": false, 00:15:35.023 "data_offset": 2048, 00:15:35.023 "data_size": 63488 00:15:35.023 }, 00:15:35.023 { 00:15:35.023 "name": "BaseBdev3", 00:15:35.023 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:35.023 "is_configured": true, 00:15:35.023 "data_offset": 2048, 00:15:35.023 "data_size": 63488 00:15:35.023 }, 00:15:35.023 { 00:15:35.023 "name": "BaseBdev4", 00:15:35.023 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:35.023 "is_configured": true, 00:15:35.023 "data_offset": 2048, 00:15:35.023 "data_size": 63488 00:15:35.023 } 00:15:35.023 ] 00:15:35.023 }' 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.023 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:35.024 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.024 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.024 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.024 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:35.024 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.024 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.024 [2024-11-15 11:27:17.943944] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:35.024 [2024-11-15 11:27:17.944013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.024 [2024-11-15 11:27:17.944041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:35.024 [2024-11-15 11:27:17.944057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.024 [2024-11-15 11:27:17.944761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.024 [2024-11-15 11:27:17.944797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:35.024 [2024-11-15 11:27:17.944890] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:35.024 [2024-11-15 11:27:17.944914] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:35.024 [2024-11-15 11:27:17.944924] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:35.024 [2024-11-15 11:27:17.944984] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:35.024 BaseBdev1 00:15:35.024 11:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.024 11:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.402 11:27:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.402 11:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.402 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.402 "name": "raid_bdev1", 00:15:36.402 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:36.402 "strip_size_kb": 0, 00:15:36.402 "state": "online", 00:15:36.402 "raid_level": "raid1", 00:15:36.402 "superblock": true, 00:15:36.402 "num_base_bdevs": 4, 00:15:36.402 "num_base_bdevs_discovered": 2, 00:15:36.402 "num_base_bdevs_operational": 2, 00:15:36.402 "base_bdevs_list": [ 00:15:36.402 { 00:15:36.402 "name": null, 00:15:36.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.402 "is_configured": false, 00:15:36.402 "data_offset": 0, 00:15:36.402 "data_size": 63488 00:15:36.402 }, 00:15:36.402 { 00:15:36.402 "name": null, 00:15:36.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.402 
"is_configured": false, 00:15:36.402 "data_offset": 2048, 00:15:36.402 "data_size": 63488 00:15:36.402 }, 00:15:36.402 { 00:15:36.402 "name": "BaseBdev3", 00:15:36.402 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:36.402 "is_configured": true, 00:15:36.402 "data_offset": 2048, 00:15:36.402 "data_size": 63488 00:15:36.402 }, 00:15:36.402 { 00:15:36.402 "name": "BaseBdev4", 00:15:36.402 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:36.402 "is_configured": true, 00:15:36.402 "data_offset": 2048, 00:15:36.402 "data_size": 63488 00:15:36.402 } 00:15:36.402 ] 00:15:36.402 }' 00:15:36.402 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.402 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:36.662 "name": "raid_bdev1", 00:15:36.662 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:36.662 "strip_size_kb": 0, 00:15:36.662 "state": "online", 00:15:36.662 "raid_level": "raid1", 00:15:36.662 "superblock": true, 00:15:36.662 "num_base_bdevs": 4, 00:15:36.662 "num_base_bdevs_discovered": 2, 00:15:36.662 "num_base_bdevs_operational": 2, 00:15:36.662 "base_bdevs_list": [ 00:15:36.662 { 00:15:36.662 "name": null, 00:15:36.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.662 "is_configured": false, 00:15:36.662 "data_offset": 0, 00:15:36.662 "data_size": 63488 00:15:36.662 }, 00:15:36.662 { 00:15:36.662 "name": null, 00:15:36.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.662 "is_configured": false, 00:15:36.662 "data_offset": 2048, 00:15:36.662 "data_size": 63488 00:15:36.662 }, 00:15:36.662 { 00:15:36.662 "name": "BaseBdev3", 00:15:36.662 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:36.662 "is_configured": true, 00:15:36.662 "data_offset": 2048, 00:15:36.662 "data_size": 63488 00:15:36.662 }, 00:15:36.662 { 00:15:36.662 "name": "BaseBdev4", 00:15:36.662 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:36.662 "is_configured": true, 00:15:36.662 "data_offset": 2048, 00:15:36.662 "data_size": 63488 00:15:36.662 } 00:15:36.662 ] 00:15:36.662 }' 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.662 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.921 [2024-11-15 11:27:19.668527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.921 [2024-11-15 11:27:19.668907] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:36.921 [2024-11-15 11:27:19.668936] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:36.921 request: 00:15:36.921 { 00:15:36.921 "base_bdev": "BaseBdev1", 00:15:36.921 "raid_bdev": "raid_bdev1", 00:15:36.921 "method": "bdev_raid_add_base_bdev", 00:15:36.921 "req_id": 1 00:15:36.921 } 00:15:36.921 Got JSON-RPC error response 00:15:36.921 response: 00:15:36.921 { 00:15:36.921 "code": -22, 00:15:36.921 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:36.921 } 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:36.921 11:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.858 "name": "raid_bdev1", 00:15:37.858 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:37.858 "strip_size_kb": 0, 00:15:37.858 "state": "online", 00:15:37.858 "raid_level": "raid1", 00:15:37.858 "superblock": true, 00:15:37.858 "num_base_bdevs": 4, 00:15:37.858 "num_base_bdevs_discovered": 2, 00:15:37.858 "num_base_bdevs_operational": 2, 00:15:37.858 "base_bdevs_list": [ 00:15:37.858 { 00:15:37.858 "name": null, 00:15:37.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.858 "is_configured": false, 00:15:37.858 "data_offset": 0, 00:15:37.858 "data_size": 63488 00:15:37.858 }, 00:15:37.858 { 00:15:37.858 "name": null, 00:15:37.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.858 "is_configured": false, 00:15:37.858 "data_offset": 2048, 00:15:37.858 "data_size": 63488 00:15:37.858 }, 00:15:37.858 { 00:15:37.858 "name": "BaseBdev3", 00:15:37.858 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:37.858 "is_configured": true, 00:15:37.858 "data_offset": 2048, 00:15:37.858 "data_size": 63488 00:15:37.858 }, 00:15:37.858 { 00:15:37.858 "name": "BaseBdev4", 00:15:37.858 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:37.858 "is_configured": true, 00:15:37.858 "data_offset": 2048, 00:15:37.858 "data_size": 63488 00:15:37.858 } 00:15:37.858 ] 00:15:37.858 }' 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.858 11:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.425 11:27:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.425 "name": "raid_bdev1", 00:15:38.425 "uuid": "c2070689-907c-4989-ac50-1598d6e53c01", 00:15:38.425 "strip_size_kb": 0, 00:15:38.425 "state": "online", 00:15:38.425 "raid_level": "raid1", 00:15:38.425 "superblock": true, 00:15:38.425 "num_base_bdevs": 4, 00:15:38.425 "num_base_bdevs_discovered": 2, 00:15:38.425 "num_base_bdevs_operational": 2, 00:15:38.425 "base_bdevs_list": [ 00:15:38.425 { 00:15:38.425 "name": null, 00:15:38.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.425 "is_configured": false, 00:15:38.425 "data_offset": 0, 00:15:38.425 "data_size": 63488 00:15:38.425 }, 00:15:38.425 { 00:15:38.425 "name": null, 00:15:38.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.425 "is_configured": false, 00:15:38.425 "data_offset": 2048, 00:15:38.425 "data_size": 63488 00:15:38.425 }, 00:15:38.425 { 00:15:38.425 "name": "BaseBdev3", 00:15:38.425 "uuid": "5e3aa6c4-89ce-5860-b878-14c844eadf3c", 00:15:38.425 "is_configured": true, 00:15:38.425 "data_offset": 2048, 00:15:38.425 "data_size": 63488 00:15:38.425 }, 
00:15:38.425 { 00:15:38.425 "name": "BaseBdev4", 00:15:38.425 "uuid": "9a0708a4-72f9-5e23-997c-63a20e3860bd", 00:15:38.425 "is_configured": true, 00:15:38.425 "data_offset": 2048, 00:15:38.425 "data_size": 63488 00:15:38.425 } 00:15:38.425 ] 00:15:38.425 }' 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78147 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78147 ']' 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78147 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:38.425 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:38.684 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78147 00:15:38.684 killing process with pid 78147 00:15:38.684 Received shutdown signal, test time was about 60.000000 seconds 00:15:38.684 00:15:38.684 Latency(us) 00:15:38.684 [2024-11-15T11:27:21.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.685 [2024-11-15T11:27:21.635Z] =================================================================================================================== 00:15:38.685 [2024-11-15T11:27:21.635Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:38.685 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:15:38.685 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:38.685 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78147' 00:15:38.685 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78147 00:15:38.685 [2024-11-15 11:27:21.400241] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.685 11:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78147 00:15:38.685 [2024-11-15 11:27:21.400433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.685 [2024-11-15 11:27:21.400535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.685 [2024-11-15 11:27:21.400582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:38.944 [2024-11-15 11:27:21.816530] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:40.321 00:15:40.321 real 0m29.465s 00:15:40.321 user 0m35.569s 00:15:40.321 sys 0m4.469s 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.321 ************************************ 00:15:40.321 END TEST raid_rebuild_test_sb 00:15:40.321 ************************************ 00:15:40.321 11:27:22 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:40.321 11:27:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:40.321 11:27:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:40.321 11:27:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:15:40.321 ************************************ 00:15:40.321 START TEST raid_rebuild_test_io 00:15:40.321 ************************************ 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.321 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78940 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78940 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 78940 ']' 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:40.322 11:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.322 [2024-11-15 11:27:23.103222] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:15:40.322 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:40.322 Zero copy mechanism will not be used. 00:15:40.322 [2024-11-15 11:27:23.103578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78940 ] 00:15:40.581 [2024-11-15 11:27:23.289718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.581 [2024-11-15 11:27:23.427329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.840 [2024-11-15 11:27:23.647648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.840 [2024-11-15 11:27:23.647727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.408 BaseBdev1_malloc 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.408 [2024-11-15 11:27:24.126287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:41.408 [2024-11-15 11:27:24.126386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.408 [2024-11-15 11:27:24.126435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:41.408 [2024-11-15 11:27:24.126476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.408 [2024-11-15 11:27:24.129711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.408 [2024-11-15 11:27:24.129929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:41.408 BaseBdev1 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:41.408 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:15:41.409 BaseBdev2_malloc 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.409 [2024-11-15 11:27:24.185841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:41.409 [2024-11-15 11:27:24.185948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.409 [2024-11-15 11:27:24.185983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:41.409 [2024-11-15 11:27:24.186003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.409 [2024-11-15 11:27:24.189195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.409 [2024-11-15 11:27:24.189317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:41.409 BaseBdev2 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.409 BaseBdev3_malloc 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.409 [2024-11-15 11:27:24.255856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:41.409 [2024-11-15 11:27:24.255942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.409 [2024-11-15 11:27:24.255977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:41.409 [2024-11-15 11:27:24.255997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.409 [2024-11-15 11:27:24.259061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.409 [2024-11-15 11:27:24.259292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:41.409 BaseBdev3 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.409 BaseBdev4_malloc 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.409 [2024-11-15 11:27:24.314868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:41.409 [2024-11-15 11:27:24.315098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.409 [2024-11-15 11:27:24.315137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:41.409 [2024-11-15 11:27:24.315158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.409 [2024-11-15 11:27:24.318016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.409 [2024-11-15 11:27:24.318117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:41.409 BaseBdev4 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.409 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.668 spare_malloc 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.668 spare_delay 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.668 [2024-11-15 11:27:24.382090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:41.668 [2024-11-15 11:27:24.382162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.668 [2024-11-15 11:27:24.382204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:41.668 [2024-11-15 11:27:24.382225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.668 [2024-11-15 11:27:24.385293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.668 [2024-11-15 11:27:24.385354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:41.668 spare 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.668 [2024-11-15 11:27:24.394248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.668 [2024-11-15 11:27:24.396949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.668 [2024-11-15 11:27:24.397171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.668 [2024-11-15 11:27:24.397332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:15:41.668 [2024-11-15 11:27:24.397499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:41.668 [2024-11-15 11:27:24.397611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:41.668 [2024-11-15 11:27:24.398095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:41.668 [2024-11-15 11:27:24.398458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:41.668 [2024-11-15 11:27:24.398634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:41.668 [2024-11-15 11:27:24.399086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.668 "name": "raid_bdev1", 00:15:41.668 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 00:15:41.668 "strip_size_kb": 0, 00:15:41.668 "state": "online", 00:15:41.668 "raid_level": "raid1", 00:15:41.668 "superblock": false, 00:15:41.668 "num_base_bdevs": 4, 00:15:41.668 "num_base_bdevs_discovered": 4, 00:15:41.668 "num_base_bdevs_operational": 4, 00:15:41.668 "base_bdevs_list": [ 00:15:41.668 { 00:15:41.668 "name": "BaseBdev1", 00:15:41.668 "uuid": "6d5561fc-1e29-5227-8ba2-71a6f2fa31e7", 00:15:41.668 "is_configured": true, 00:15:41.668 "data_offset": 0, 00:15:41.668 "data_size": 65536 00:15:41.668 }, 00:15:41.668 { 00:15:41.668 "name": "BaseBdev2", 00:15:41.668 "uuid": "a944e258-ba9b-5dba-86ce-e19e6db31754", 00:15:41.668 "is_configured": true, 00:15:41.668 "data_offset": 0, 00:15:41.668 "data_size": 65536 00:15:41.668 }, 00:15:41.668 { 00:15:41.668 "name": "BaseBdev3", 00:15:41.668 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:41.668 "is_configured": true, 00:15:41.668 "data_offset": 0, 00:15:41.668 "data_size": 65536 00:15:41.668 }, 00:15:41.668 { 00:15:41.668 "name": "BaseBdev4", 00:15:41.668 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 00:15:41.668 "is_configured": true, 00:15:41.668 "data_offset": 0, 00:15:41.668 "data_size": 65536 00:15:41.668 } 00:15:41.668 ] 00:15:41.668 }' 00:15:41.668 
11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.668 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.236 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:42.236 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:42.236 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.236 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.236 [2024-11-15 11:27:24.927719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.236 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.236 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:42.236 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.236 11:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:42.236 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.236 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.236 11:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:42.237 11:27:25 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.237 [2024-11-15 11:27:25.035145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.237 "name": "raid_bdev1", 00:15:42.237 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 00:15:42.237 "strip_size_kb": 0, 00:15:42.237 "state": "online", 00:15:42.237 "raid_level": "raid1", 00:15:42.237 "superblock": false, 00:15:42.237 "num_base_bdevs": 4, 00:15:42.237 "num_base_bdevs_discovered": 3, 00:15:42.237 "num_base_bdevs_operational": 3, 00:15:42.237 "base_bdevs_list": [ 00:15:42.237 { 00:15:42.237 "name": null, 00:15:42.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.237 "is_configured": false, 00:15:42.237 "data_offset": 0, 00:15:42.237 "data_size": 65536 00:15:42.237 }, 00:15:42.237 { 00:15:42.237 "name": "BaseBdev2", 00:15:42.237 "uuid": "a944e258-ba9b-5dba-86ce-e19e6db31754", 00:15:42.237 "is_configured": true, 00:15:42.237 "data_offset": 0, 00:15:42.237 "data_size": 65536 00:15:42.237 }, 00:15:42.237 { 00:15:42.237 "name": "BaseBdev3", 00:15:42.237 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:42.237 "is_configured": true, 00:15:42.237 "data_offset": 0, 00:15:42.237 "data_size": 65536 00:15:42.237 }, 00:15:42.237 { 00:15:42.237 "name": "BaseBdev4", 00:15:42.237 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 00:15:42.237 "is_configured": true, 00:15:42.237 "data_offset": 0, 00:15:42.237 "data_size": 65536 00:15:42.237 } 00:15:42.237 ] 00:15:42.237 }' 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.237 11:27:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.237 [2024-11-15 11:27:25.168369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:42.237 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:42.237 Zero copy mechanism will not be used. 00:15:42.237 Running I/O for 60 seconds... 
00:15:42.805 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.805 11:27:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.805 11:27:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.805 [2024-11-15 11:27:25.583709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.805 11:27:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.805 11:27:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:42.805 [2024-11-15 11:27:25.639641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:42.805 [2024-11-15 11:27:25.642482] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.064 [2024-11-15 11:27:25.763384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:43.064 [2024-11-15 11:27:25.765756] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:43.064 [2024-11-15 11:27:25.990710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:43.064 [2024-11-15 11:27:25.992426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:43.582 152.00 IOPS, 456.00 MiB/s [2024-11-15T11:27:26.532Z] [2024-11-15 11:27:26.491615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:43.582 [2024-11-15 11:27:26.492430] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.841 "name": "raid_bdev1", 00:15:43.841 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 00:15:43.841 "strip_size_kb": 0, 00:15:43.841 "state": "online", 00:15:43.841 "raid_level": "raid1", 00:15:43.841 "superblock": false, 00:15:43.841 "num_base_bdevs": 4, 00:15:43.841 "num_base_bdevs_discovered": 4, 00:15:43.841 "num_base_bdevs_operational": 4, 00:15:43.841 "process": { 00:15:43.841 "type": "rebuild", 00:15:43.841 "target": "spare", 00:15:43.841 "progress": { 00:15:43.841 "blocks": 10240, 00:15:43.841 "percent": 15 00:15:43.841 } 00:15:43.841 }, 00:15:43.841 "base_bdevs_list": [ 00:15:43.841 { 00:15:43.841 "name": "spare", 00:15:43.841 "uuid": "80b34908-adb7-5500-b40f-f779c598b238", 00:15:43.841 "is_configured": true, 00:15:43.841 "data_offset": 0, 00:15:43.841 "data_size": 65536 00:15:43.841 }, 00:15:43.841 { 
00:15:43.841 "name": "BaseBdev2", 00:15:43.841 "uuid": "a944e258-ba9b-5dba-86ce-e19e6db31754", 00:15:43.841 "is_configured": true, 00:15:43.841 "data_offset": 0, 00:15:43.841 "data_size": 65536 00:15:43.841 }, 00:15:43.841 { 00:15:43.841 "name": "BaseBdev3", 00:15:43.841 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:43.841 "is_configured": true, 00:15:43.841 "data_offset": 0, 00:15:43.841 "data_size": 65536 00:15:43.841 }, 00:15:43.841 { 00:15:43.841 "name": "BaseBdev4", 00:15:43.841 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 00:15:43.841 "is_configured": true, 00:15:43.841 "data_offset": 0, 00:15:43.841 "data_size": 65536 00:15:43.841 } 00:15:43.841 ] 00:15:43.841 }' 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.841 11:27:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.841 [2024-11-15 11:27:26.770997] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:43.841 [2024-11-15 11:27:26.773384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:44.100 11:27:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.100 11:27:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:44.100 11:27:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.100 11:27:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.100 [2024-11-15 11:27:26.812261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.100 [2024-11-15 11:27:26.992985] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:44.100 [2024-11-15 11:27:27.008988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.100 [2024-11-15 11:27:27.009055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.100 [2024-11-15 11:27:27.009086] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:44.100 [2024-11-15 11:27:27.031963] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.360 "name": "raid_bdev1", 00:15:44.360 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 00:15:44.360 "strip_size_kb": 0, 00:15:44.360 "state": "online", 00:15:44.360 "raid_level": "raid1", 00:15:44.360 "superblock": false, 00:15:44.360 "num_base_bdevs": 4, 00:15:44.360 "num_base_bdevs_discovered": 3, 00:15:44.360 "num_base_bdevs_operational": 3, 00:15:44.360 "base_bdevs_list": [ 00:15:44.360 { 00:15:44.360 "name": null, 00:15:44.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.360 "is_configured": false, 00:15:44.360 "data_offset": 0, 00:15:44.360 "data_size": 65536 00:15:44.360 }, 00:15:44.360 { 00:15:44.360 "name": "BaseBdev2", 00:15:44.360 "uuid": "a944e258-ba9b-5dba-86ce-e19e6db31754", 00:15:44.360 "is_configured": true, 00:15:44.360 "data_offset": 0, 00:15:44.360 "data_size": 65536 00:15:44.360 }, 00:15:44.360 { 00:15:44.360 "name": "BaseBdev3", 00:15:44.360 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:44.360 "is_configured": true, 00:15:44.360 "data_offset": 0, 00:15:44.360 "data_size": 65536 00:15:44.360 }, 00:15:44.360 { 00:15:44.360 "name": "BaseBdev4", 00:15:44.360 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 00:15:44.360 "is_configured": true, 00:15:44.360 "data_offset": 0, 00:15:44.360 "data_size": 65536 00:15:44.360 } 00:15:44.360 ] 00:15:44.360 }' 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.360 11:27:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.928 121.50 IOPS, 364.50 MiB/s 
[2024-11-15T11:27:27.878Z] 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.928 "name": "raid_bdev1", 00:15:44.928 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 00:15:44.928 "strip_size_kb": 0, 00:15:44.928 "state": "online", 00:15:44.928 "raid_level": "raid1", 00:15:44.928 "superblock": false, 00:15:44.928 "num_base_bdevs": 4, 00:15:44.928 "num_base_bdevs_discovered": 3, 00:15:44.928 "num_base_bdevs_operational": 3, 00:15:44.928 "base_bdevs_list": [ 00:15:44.928 { 00:15:44.928 "name": null, 00:15:44.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.928 "is_configured": false, 00:15:44.928 "data_offset": 0, 00:15:44.928 "data_size": 65536 00:15:44.928 }, 00:15:44.928 { 00:15:44.928 "name": "BaseBdev2", 00:15:44.928 "uuid": "a944e258-ba9b-5dba-86ce-e19e6db31754", 00:15:44.928 "is_configured": true, 00:15:44.928 
"data_offset": 0, 00:15:44.928 "data_size": 65536 00:15:44.928 }, 00:15:44.928 { 00:15:44.928 "name": "BaseBdev3", 00:15:44.928 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:44.928 "is_configured": true, 00:15:44.928 "data_offset": 0, 00:15:44.928 "data_size": 65536 00:15:44.928 }, 00:15:44.928 { 00:15:44.928 "name": "BaseBdev4", 00:15:44.928 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 00:15:44.928 "is_configured": true, 00:15:44.928 "data_offset": 0, 00:15:44.928 "data_size": 65536 00:15:44.928 } 00:15:44.928 ] 00:15:44.928 }' 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.928 [2024-11-15 11:27:27.745103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.928 11:27:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:44.928 [2024-11-15 11:27:27.821395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:44.928 [2024-11-15 11:27:27.824146] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:45.187 [2024-11-15 11:27:27.943310] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:45.187 [2024-11-15 11:27:27.944102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:45.187 [2024-11-15 11:27:28.125003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:45.704 135.00 IOPS, 405.00 MiB/s [2024-11-15T11:27:28.654Z] [2024-11-15 11:27:28.475299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:45.962 [2024-11-15 11:27:28.698603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:45.962 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.962 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.962 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.962 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.963 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.963 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.963 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.963 11:27:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.963 11:27:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.963 11:27:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.963 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.963 "name": 
"raid_bdev1", 00:15:45.963 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 00:15:45.963 "strip_size_kb": 0, 00:15:45.963 "state": "online", 00:15:45.963 "raid_level": "raid1", 00:15:45.963 "superblock": false, 00:15:45.963 "num_base_bdevs": 4, 00:15:45.963 "num_base_bdevs_discovered": 4, 00:15:45.963 "num_base_bdevs_operational": 4, 00:15:45.963 "process": { 00:15:45.963 "type": "rebuild", 00:15:45.963 "target": "spare", 00:15:45.963 "progress": { 00:15:45.963 "blocks": 10240, 00:15:45.963 "percent": 15 00:15:45.963 } 00:15:45.963 }, 00:15:45.963 "base_bdevs_list": [ 00:15:45.963 { 00:15:45.963 "name": "spare", 00:15:45.963 "uuid": "80b34908-adb7-5500-b40f-f779c598b238", 00:15:45.963 "is_configured": true, 00:15:45.963 "data_offset": 0, 00:15:45.963 "data_size": 65536 00:15:45.963 }, 00:15:45.963 { 00:15:45.963 "name": "BaseBdev2", 00:15:45.963 "uuid": "a944e258-ba9b-5dba-86ce-e19e6db31754", 00:15:45.963 "is_configured": true, 00:15:45.963 "data_offset": 0, 00:15:45.963 "data_size": 65536 00:15:45.963 }, 00:15:45.963 { 00:15:45.963 "name": "BaseBdev3", 00:15:45.963 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:45.963 "is_configured": true, 00:15:45.963 "data_offset": 0, 00:15:45.963 "data_size": 65536 00:15:45.963 }, 00:15:45.963 { 00:15:45.963 "name": "BaseBdev4", 00:15:45.963 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 00:15:45.963 "is_configured": true, 00:15:45.963 "data_offset": 0, 00:15:45.963 "data_size": 65536 00:15:45.963 } 00:15:45.963 ] 00:15:45.963 }' 00:15:45.963 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.221 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.221 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.221 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.221 11:27:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:46.221 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:46.221 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:46.221 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:46.221 11:27:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:46.221 11:27:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.221 11:27:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.221 [2024-11-15 11:27:28.974116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:46.221 [2024-11-15 11:27:28.974293] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:46.221 [2024-11-15 11:27:28.976884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:46.221 [2024-11-15 11:27:29.087984] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:46.221 [2024-11-15 11:27:29.088296] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:46.221 11:27:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.221 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:46.221 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:46.221 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.221 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:46.221 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.221 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.221 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.221 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.221 11:27:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.221 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.221 11:27:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.222 11:27:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.222 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.222 "name": "raid_bdev1", 00:15:46.222 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 00:15:46.222 "strip_size_kb": 0, 00:15:46.222 "state": "online", 00:15:46.222 "raid_level": "raid1", 00:15:46.222 "superblock": false, 00:15:46.222 "num_base_bdevs": 4, 00:15:46.222 "num_base_bdevs_discovered": 3, 00:15:46.222 "num_base_bdevs_operational": 3, 00:15:46.222 "process": { 00:15:46.222 "type": "rebuild", 00:15:46.222 "target": "spare", 00:15:46.222 "progress": { 00:15:46.222 "blocks": 14336, 00:15:46.222 "percent": 21 00:15:46.222 } 00:15:46.222 }, 00:15:46.222 "base_bdevs_list": [ 00:15:46.222 { 00:15:46.222 "name": "spare", 00:15:46.222 "uuid": "80b34908-adb7-5500-b40f-f779c598b238", 00:15:46.222 "is_configured": true, 00:15:46.222 "data_offset": 0, 00:15:46.222 "data_size": 65536 00:15:46.222 }, 00:15:46.222 { 00:15:46.222 "name": null, 00:15:46.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.222 "is_configured": false, 00:15:46.222 "data_offset": 0, 00:15:46.222 
"data_size": 65536 00:15:46.222 }, 00:15:46.222 { 00:15:46.222 "name": "BaseBdev3", 00:15:46.222 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:46.222 "is_configured": true, 00:15:46.222 "data_offset": 0, 00:15:46.222 "data_size": 65536 00:15:46.222 }, 00:15:46.222 { 00:15:46.222 "name": "BaseBdev4", 00:15:46.222 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 00:15:46.222 "is_configured": true, 00:15:46.222 "data_offset": 0, 00:15:46.222 "data_size": 65536 00:15:46.222 } 00:15:46.222 ] 00:15:46.222 }' 00:15:46.222 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.481 122.50 IOPS, 367.50 MiB/s [2024-11-15T11:27:29.431Z] 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.481 [2024-11-15 11:27:29.222439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:46.481 [2024-11-15 11:27:29.222801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=526 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- 
# local target=spare 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.481 "name": "raid_bdev1", 00:15:46.481 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 00:15:46.481 "strip_size_kb": 0, 00:15:46.481 "state": "online", 00:15:46.481 "raid_level": "raid1", 00:15:46.481 "superblock": false, 00:15:46.481 "num_base_bdevs": 4, 00:15:46.481 "num_base_bdevs_discovered": 3, 00:15:46.481 "num_base_bdevs_operational": 3, 00:15:46.481 "process": { 00:15:46.481 "type": "rebuild", 00:15:46.481 "target": "spare", 00:15:46.481 "progress": { 00:15:46.481 "blocks": 16384, 00:15:46.481 "percent": 25 00:15:46.481 } 00:15:46.481 }, 00:15:46.481 "base_bdevs_list": [ 00:15:46.481 { 00:15:46.481 "name": "spare", 00:15:46.481 "uuid": "80b34908-adb7-5500-b40f-f779c598b238", 00:15:46.481 "is_configured": true, 00:15:46.481 "data_offset": 0, 00:15:46.481 "data_size": 65536 00:15:46.481 }, 00:15:46.481 { 00:15:46.481 "name": null, 00:15:46.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.481 "is_configured": false, 00:15:46.481 "data_offset": 0, 00:15:46.481 "data_size": 65536 00:15:46.481 }, 00:15:46.481 { 00:15:46.481 "name": "BaseBdev3", 00:15:46.481 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:46.481 "is_configured": true, 00:15:46.481 "data_offset": 0, 00:15:46.481 
"data_size": 65536 00:15:46.481 }, 00:15:46.481 { 00:15:46.481 "name": "BaseBdev4", 00:15:46.481 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 00:15:46.481 "is_configured": true, 00:15:46.481 "data_offset": 0, 00:15:46.481 "data_size": 65536 00:15:46.481 } 00:15:46.481 ] 00:15:46.481 }' 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.481 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.740 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.740 11:27:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.740 [2024-11-15 11:27:29.503515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:47.308 [2024-11-15 11:27:29.982953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:47.308 116.00 IOPS, 348.00 MiB/s [2024-11-15T11:27:30.258Z] [2024-11-15 11:27:30.217135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:47.567 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.567 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.567 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.567 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.567 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.567 11:27:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.567 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.567 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.567 11:27:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.567 11:27:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.567 11:27:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.567 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.567 "name": "raid_bdev1", 00:15:47.567 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 00:15:47.567 "strip_size_kb": 0, 00:15:47.567 "state": "online", 00:15:47.567 "raid_level": "raid1", 00:15:47.567 "superblock": false, 00:15:47.567 "num_base_bdevs": 4, 00:15:47.567 "num_base_bdevs_discovered": 3, 00:15:47.567 "num_base_bdevs_operational": 3, 00:15:47.567 "process": { 00:15:47.567 "type": "rebuild", 00:15:47.567 "target": "spare", 00:15:47.567 "progress": { 00:15:47.567 "blocks": 30720, 00:15:47.567 "percent": 46 00:15:47.567 } 00:15:47.567 }, 00:15:47.567 "base_bdevs_list": [ 00:15:47.567 { 00:15:47.567 "name": "spare", 00:15:47.567 "uuid": "80b34908-adb7-5500-b40f-f779c598b238", 00:15:47.567 "is_configured": true, 00:15:47.567 "data_offset": 0, 00:15:47.567 "data_size": 65536 00:15:47.567 }, 00:15:47.567 { 00:15:47.567 "name": null, 00:15:47.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.567 "is_configured": false, 00:15:47.567 "data_offset": 0, 00:15:47.567 "data_size": 65536 00:15:47.567 }, 00:15:47.567 { 00:15:47.567 "name": "BaseBdev3", 00:15:47.567 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:47.567 "is_configured": true, 00:15:47.567 "data_offset": 0, 00:15:47.567 "data_size": 65536 00:15:47.567 }, 
00:15:47.567 { 00:15:47.567 "name": "BaseBdev4", 00:15:47.567 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 00:15:47.567 "is_configured": true, 00:15:47.567 "data_offset": 0, 00:15:47.567 "data_size": 65536 00:15:47.567 } 00:15:47.567 ] 00:15:47.567 }' 00:15:47.567 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.825 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.825 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.825 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.826 11:27:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.084 [2024-11-15 11:27:31.025665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:48.601 109.00 IOPS, 327.00 MiB/s [2024-11-15T11:27:31.551Z] [2024-11-15 11:27:31.387658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:48.860 [2024-11-15 11:27:31.612504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:48.860 [2024-11-15 11:27:31.613995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.860 11:27:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.860 "name": "raid_bdev1", 00:15:48.860 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 00:15:48.860 "strip_size_kb": 0, 00:15:48.860 "state": "online", 00:15:48.860 "raid_level": "raid1", 00:15:48.860 "superblock": false, 00:15:48.860 "num_base_bdevs": 4, 00:15:48.860 "num_base_bdevs_discovered": 3, 00:15:48.860 "num_base_bdevs_operational": 3, 00:15:48.860 "process": { 00:15:48.860 "type": "rebuild", 00:15:48.860 "target": "spare", 00:15:48.860 "progress": { 00:15:48.860 "blocks": 51200, 00:15:48.860 "percent": 78 00:15:48.860 } 00:15:48.860 }, 00:15:48.860 "base_bdevs_list": [ 00:15:48.860 { 00:15:48.860 "name": "spare", 00:15:48.860 "uuid": "80b34908-adb7-5500-b40f-f779c598b238", 00:15:48.860 "is_configured": true, 00:15:48.860 "data_offset": 0, 00:15:48.860 "data_size": 65536 00:15:48.860 }, 00:15:48.860 { 00:15:48.860 "name": null, 00:15:48.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.860 "is_configured": false, 00:15:48.860 "data_offset": 0, 00:15:48.860 "data_size": 65536 00:15:48.860 }, 00:15:48.860 { 00:15:48.860 "name": "BaseBdev3", 00:15:48.860 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:48.860 
"is_configured": true, 00:15:48.860 "data_offset": 0, 00:15:48.860 "data_size": 65536 00:15:48.860 }, 00:15:48.860 { 00:15:48.860 "name": "BaseBdev4", 00:15:48.860 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 00:15:48.860 "is_configured": true, 00:15:48.860 "data_offset": 0, 00:15:48.860 "data_size": 65536 00:15:48.860 } 00:15:48.860 ] 00:15:48.860 }' 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.860 11:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.428 97.00 IOPS, 291.00 MiB/s [2024-11-15T11:27:32.378Z] [2024-11-15 11:27:32.199150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:50.016 [2024-11-15 11:27:32.650481] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:50.016 [2024-11-15 11:27:32.757500] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:50.016 [2024-11-15 11:27:32.761739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.016 11:27:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.016 "name": "raid_bdev1", 00:15:50.016 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 00:15:50.016 "strip_size_kb": 0, 00:15:50.016 "state": "online", 00:15:50.016 "raid_level": "raid1", 00:15:50.016 "superblock": false, 00:15:50.016 "num_base_bdevs": 4, 00:15:50.016 "num_base_bdevs_discovered": 3, 00:15:50.016 "num_base_bdevs_operational": 3, 00:15:50.016 "base_bdevs_list": [ 00:15:50.016 { 00:15:50.016 "name": "spare", 00:15:50.016 "uuid": "80b34908-adb7-5500-b40f-f779c598b238", 00:15:50.016 "is_configured": true, 00:15:50.016 "data_offset": 0, 00:15:50.016 "data_size": 65536 00:15:50.016 }, 00:15:50.016 { 00:15:50.016 "name": null, 00:15:50.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.016 "is_configured": false, 00:15:50.016 "data_offset": 0, 00:15:50.016 "data_size": 65536 00:15:50.016 }, 00:15:50.016 { 00:15:50.016 "name": "BaseBdev3", 00:15:50.016 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:50.016 "is_configured": true, 00:15:50.016 "data_offset": 0, 00:15:50.016 "data_size": 65536 00:15:50.016 }, 00:15:50.016 { 00:15:50.016 "name": "BaseBdev4", 00:15:50.016 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 
00:15:50.016 "is_configured": true, 00:15:50.016 "data_offset": 0, 00:15:50.016 "data_size": 65536 00:15:50.016 } 00:15:50.016 ] 00:15:50.016 }' 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.016 11:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.276 11:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.276 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.276 "name": "raid_bdev1", 00:15:50.276 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 
00:15:50.276 "strip_size_kb": 0, 00:15:50.276 "state": "online", 00:15:50.276 "raid_level": "raid1", 00:15:50.276 "superblock": false, 00:15:50.276 "num_base_bdevs": 4, 00:15:50.276 "num_base_bdevs_discovered": 3, 00:15:50.276 "num_base_bdevs_operational": 3, 00:15:50.276 "base_bdevs_list": [ 00:15:50.276 { 00:15:50.276 "name": "spare", 00:15:50.276 "uuid": "80b34908-adb7-5500-b40f-f779c598b238", 00:15:50.276 "is_configured": true, 00:15:50.276 "data_offset": 0, 00:15:50.276 "data_size": 65536 00:15:50.276 }, 00:15:50.276 { 00:15:50.276 "name": null, 00:15:50.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.276 "is_configured": false, 00:15:50.276 "data_offset": 0, 00:15:50.276 "data_size": 65536 00:15:50.276 }, 00:15:50.276 { 00:15:50.276 "name": "BaseBdev3", 00:15:50.276 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:50.276 "is_configured": true, 00:15:50.276 "data_offset": 0, 00:15:50.276 "data_size": 65536 00:15:50.276 }, 00:15:50.276 { 00:15:50.276 "name": "BaseBdev4", 00:15:50.276 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 00:15:50.276 "is_configured": true, 00:15:50.276 "data_offset": 0, 00:15:50.276 "data_size": 65536 00:15:50.276 } 00:15:50.276 ] 00:15:50.276 }' 00:15:50.276 11:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.276 "name": "raid_bdev1", 00:15:50.276 "uuid": "957ba049-8dad-4bb8-8a38-883c3d7a35a0", 00:15:50.276 "strip_size_kb": 0, 00:15:50.276 "state": "online", 00:15:50.276 "raid_level": "raid1", 00:15:50.276 "superblock": false, 00:15:50.276 "num_base_bdevs": 4, 00:15:50.276 "num_base_bdevs_discovered": 3, 00:15:50.276 "num_base_bdevs_operational": 3, 00:15:50.276 "base_bdevs_list": [ 00:15:50.276 { 00:15:50.276 "name": "spare", 00:15:50.276 "uuid": "80b34908-adb7-5500-b40f-f779c598b238", 00:15:50.276 "is_configured": true, 00:15:50.276 "data_offset": 0, 00:15:50.276 
"data_size": 65536 00:15:50.276 }, 00:15:50.276 { 00:15:50.276 "name": null, 00:15:50.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.276 "is_configured": false, 00:15:50.276 "data_offset": 0, 00:15:50.276 "data_size": 65536 00:15:50.276 }, 00:15:50.276 { 00:15:50.276 "name": "BaseBdev3", 00:15:50.276 "uuid": "6e94907d-0188-5ea9-913d-11ae6507825a", 00:15:50.276 "is_configured": true, 00:15:50.276 "data_offset": 0, 00:15:50.276 "data_size": 65536 00:15:50.276 }, 00:15:50.276 { 00:15:50.276 "name": "BaseBdev4", 00:15:50.276 "uuid": "31807713-63d8-5a1e-bc89-c70e7b3f0ed9", 00:15:50.276 "is_configured": true, 00:15:50.276 "data_offset": 0, 00:15:50.276 "data_size": 65536 00:15:50.276 } 00:15:50.276 ] 00:15:50.276 }' 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.276 11:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.845 89.12 IOPS, 267.38 MiB/s [2024-11-15T11:27:33.795Z] 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.845 [2024-11-15 11:27:33.632290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.845 [2024-11-15 11:27:33.632331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.845 00:15:50.845 Latency(us) 00:15:50.845 [2024-11-15T11:27:33.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.845 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:50.845 raid_bdev1 : 8.50 85.33 255.98 0.00 0.00 16454.60 268.10 120109.61 00:15:50.845 [2024-11-15T11:27:33.795Z] 
=================================================================================================================== 00:15:50.845 [2024-11-15T11:27:33.795Z] Total : 85.33 255.98 0.00 0.00 16454.60 268.10 120109.61 00:15:50.845 [2024-11-15 11:27:33.686780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.845 [2024-11-15 11:27:33.686863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.845 [2024-11-15 11:27:33.686985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.845 [2024-11-15 11:27:33.687000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:50.845 { 00:15:50.845 "results": [ 00:15:50.845 { 00:15:50.845 "job": "raid_bdev1", 00:15:50.845 "core_mask": "0x1", 00:15:50.845 "workload": "randrw", 00:15:50.845 "percentage": 50, 00:15:50.845 "status": "finished", 00:15:50.845 "queue_depth": 2, 00:15:50.845 "io_size": 3145728, 00:15:50.845 "runtime": 8.496642, 00:15:50.845 "iops": 85.3278271580702, 00:15:50.845 "mibps": 255.98348147421063, 00:15:50.845 "io_failed": 0, 00:15:50.845 "io_timeout": 0, 00:15:50.845 "avg_latency_us": 16454.600145454544, 00:15:50.845 "min_latency_us": 268.1018181818182, 00:15:50.845 "max_latency_us": 120109.61454545455 00:15:50.845 } 00:15:50.845 ], 00:15:50.845 "core_count": 1 00:15:50.845 } 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.845 11:27:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:51.104 /dev/nbd0 00:15:51.104 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( 
i = 1 )) 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.363 1+0 records in 00:15:51.363 1+0 records out 00:15:51.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000844503 s, 4.9 MB/s 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:51.363 11:27:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.363 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:51.623 /dev/nbd1 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:51.623 11:27:34 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.623 1+0 records in 00:15:51.623 1+0 records out 00:15:51.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659483 s, 6.2 MB/s 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.623 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:51.882 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:51.882 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.882 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:51.882 
11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.882 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:51.882 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.882 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:52.141 11:27:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:52.400 /dev/nbd1 00:15:52.400 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:52.400 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:52.400 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:52.400 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:52.400 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:52.400 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:52.400 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:52.400 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:52.400 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:52.401 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:52.401 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.401 1+0 records in 00:15:52.401 1+0 records out 00:15:52.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491176 s, 8.3 
MB/s 00:15:52.401 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.401 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:52.401 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.401 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:52.401 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:52.401 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.401 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:52.401 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:52.660 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:52.660 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.660 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:52.660 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.660 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:52.660 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.660 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.919 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:53.179 11:27:35 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78940 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 78940 ']' 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 78940 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:53.179 11:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78940 00:15:53.179 killing process with pid 78940 00:15:53.179 Received shutdown signal, test time was about 10.842044 seconds 00:15:53.179 00:15:53.179 Latency(us) 00:15:53.179 [2024-11-15T11:27:36.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.179 [2024-11-15T11:27:36.129Z] =================================================================================================================== 00:15:53.179 [2024-11-15T11:27:36.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:53.179 11:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:53.179 11:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:53.179 11:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78940' 00:15:53.179 11:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 78940 00:15:53.179 11:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 78940 00:15:53.179 
[2024-11-15 11:27:36.013633] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.748 [2024-11-15 11:27:36.398433] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:54.685 00:15:54.685 real 0m14.538s 00:15:54.685 user 0m19.076s 00:15:54.685 sys 0m1.923s 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.685 ************************************ 00:15:54.685 END TEST raid_rebuild_test_io 00:15:54.685 ************************************ 00:15:54.685 11:27:37 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:54.685 11:27:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:54.685 11:27:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:54.685 11:27:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.685 ************************************ 00:15:54.685 START TEST raid_rebuild_test_sb_io 00:15:54.685 ************************************ 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:54.685 11:27:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79360 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79360 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79360 ']' 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:54.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.685 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.686 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:54.686 11:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.945 [2024-11-15 11:27:37.700640] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:15:54.945 [2024-11-15 11:27:37.700842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79360 ] 00:15:54.945 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:54.945 Zero copy mechanism will not be used. 00:15:54.945 [2024-11-15 11:27:37.881789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.204 [2024-11-15 11:27:38.020382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.463 [2024-11-15 11:27:38.235550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.463 [2024-11-15 11:27:38.235605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.722 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:55.722 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:15:55.722 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.722 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:55.722 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.722 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.722 BaseBdev1_malloc 00:15:55.722 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.722 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:55.722 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.722 11:27:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.722 [2024-11-15 11:27:38.670035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:55.722 [2024-11-15 11:27:38.670124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.722 [2024-11-15 11:27:38.670160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:55.722 [2024-11-15 11:27:38.670197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.981 [2024-11-15 11:27:38.673462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.981 [2024-11-15 11:27:38.673512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.981 BaseBdev1 00:15:55.981 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.982 BaseBdev2_malloc 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.982 [2024-11-15 11:27:38.729226] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:15:55.982 [2024-11-15 11:27:38.729332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.982 [2024-11-15 11:27:38.729367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:55.982 [2024-11-15 11:27:38.729402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.982 [2024-11-15 11:27:38.732450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.982 [2024-11-15 11:27:38.732501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:55.982 BaseBdev2 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.982 BaseBdev3_malloc 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.982 [2024-11-15 11:27:38.799716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:55.982 [2024-11-15 11:27:38.799816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.982 
[2024-11-15 11:27:38.799849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:55.982 [2024-11-15 11:27:38.799868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.982 [2024-11-15 11:27:38.802988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.982 [2024-11-15 11:27:38.803069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:55.982 BaseBdev3 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.982 BaseBdev4_malloc 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.982 [2024-11-15 11:27:38.861233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:55.982 [2024-11-15 11:27:38.861326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.982 [2024-11-15 11:27:38.861359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:55.982 [2024-11-15 11:27:38.861378] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.982 [2024-11-15 11:27:38.864371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.982 [2024-11-15 11:27:38.864423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:55.982 BaseBdev4 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.982 spare_malloc 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.982 spare_delay 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.982 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.982 [2024-11-15 11:27:38.928082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:55.982 [2024-11-15 11:27:38.928151] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:55.982 [2024-11-15 11:27:38.928193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:55.982 [2024-11-15 11:27:38.928216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.242 [2024-11-15 11:27:38.931260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.242 [2024-11-15 11:27:38.931325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:56.242 spare 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.242 [2024-11-15 11:27:38.936223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.242 [2024-11-15 11:27:38.938944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.242 [2024-11-15 11:27:38.939068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.242 [2024-11-15 11:27:38.939161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:56.242 [2024-11-15 11:27:38.939479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:56.242 [2024-11-15 11:27:38.939531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:56.242 [2024-11-15 11:27:38.939858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:56.242 [2024-11-15 11:27:38.940134] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:56.242 [2024-11-15 11:27:38.940159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:56.242 [2024-11-15 11:27:38.940460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.242 11:27:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.242 "name": "raid_bdev1", 00:15:56.242 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:15:56.242 "strip_size_kb": 0, 00:15:56.242 "state": "online", 00:15:56.242 "raid_level": "raid1", 00:15:56.242 "superblock": true, 00:15:56.242 "num_base_bdevs": 4, 00:15:56.242 "num_base_bdevs_discovered": 4, 00:15:56.242 "num_base_bdevs_operational": 4, 00:15:56.242 "base_bdevs_list": [ 00:15:56.242 { 00:15:56.242 "name": "BaseBdev1", 00:15:56.242 "uuid": "06dab298-6d3c-51ae-9c5c-98407b10702b", 00:15:56.242 "is_configured": true, 00:15:56.242 "data_offset": 2048, 00:15:56.242 "data_size": 63488 00:15:56.242 }, 00:15:56.242 { 00:15:56.242 "name": "BaseBdev2", 00:15:56.242 "uuid": "69fff842-0c70-54a7-8fa2-83f96ee986eb", 00:15:56.242 "is_configured": true, 00:15:56.242 "data_offset": 2048, 00:15:56.242 "data_size": 63488 00:15:56.242 }, 00:15:56.242 { 00:15:56.242 "name": "BaseBdev3", 00:15:56.242 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:15:56.242 "is_configured": true, 00:15:56.242 "data_offset": 2048, 00:15:56.242 "data_size": 63488 00:15:56.242 }, 00:15:56.242 { 00:15:56.242 "name": "BaseBdev4", 00:15:56.242 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:15:56.242 "is_configured": true, 00:15:56.242 "data_offset": 2048, 00:15:56.242 "data_size": 63488 00:15:56.242 } 00:15:56.242 ] 00:15:56.242 }' 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.242 11:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.501 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.501 11:27:39 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:56.501 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.501 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.501 [2024-11-15 11:27:39.449118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.760 [2024-11-15 11:27:39.548626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.760 
11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.760 "name": "raid_bdev1", 00:15:56.760 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 
00:15:56.760 "strip_size_kb": 0, 00:15:56.760 "state": "online", 00:15:56.760 "raid_level": "raid1", 00:15:56.760 "superblock": true, 00:15:56.760 "num_base_bdevs": 4, 00:15:56.760 "num_base_bdevs_discovered": 3, 00:15:56.760 "num_base_bdevs_operational": 3, 00:15:56.760 "base_bdevs_list": [ 00:15:56.760 { 00:15:56.760 "name": null, 00:15:56.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.760 "is_configured": false, 00:15:56.760 "data_offset": 0, 00:15:56.760 "data_size": 63488 00:15:56.760 }, 00:15:56.760 { 00:15:56.760 "name": "BaseBdev2", 00:15:56.760 "uuid": "69fff842-0c70-54a7-8fa2-83f96ee986eb", 00:15:56.760 "is_configured": true, 00:15:56.760 "data_offset": 2048, 00:15:56.760 "data_size": 63488 00:15:56.760 }, 00:15:56.760 { 00:15:56.760 "name": "BaseBdev3", 00:15:56.760 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:15:56.760 "is_configured": true, 00:15:56.760 "data_offset": 2048, 00:15:56.760 "data_size": 63488 00:15:56.760 }, 00:15:56.760 { 00:15:56.760 "name": "BaseBdev4", 00:15:56.760 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:15:56.760 "is_configured": true, 00:15:56.760 "data_offset": 2048, 00:15:56.760 "data_size": 63488 00:15:56.760 } 00:15:56.760 ] 00:15:56.760 }' 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.760 11:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.760 [2024-11-15 11:27:39.657040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:56.760 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:56.760 Zero copy mechanism will not be used. 00:15:56.760 Running I/O for 60 seconds... 
00:15:57.328 11:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.328 11:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.328 11:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.328 [2024-11-15 11:27:40.085915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.328 11:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.328 11:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:57.328 [2024-11-15 11:27:40.166008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:57.328 [2024-11-15 11:27:40.168776] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:57.587 [2024-11-15 11:27:40.286097] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:57.587 [2024-11-15 11:27:40.288262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:58.104 143.00 IOPS, 429.00 MiB/s [2024-11-15T11:27:41.054Z] [2024-11-15 11:27:40.859290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:58.104 [2024-11-15 11:27:40.859883] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:58.104 [2024-11-15 11:27:41.019234] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.363 "name": "raid_bdev1", 00:15:58.363 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:15:58.363 "strip_size_kb": 0, 00:15:58.363 "state": "online", 00:15:58.363 "raid_level": "raid1", 00:15:58.363 "superblock": true, 00:15:58.363 "num_base_bdevs": 4, 00:15:58.363 "num_base_bdevs_discovered": 4, 00:15:58.363 "num_base_bdevs_operational": 4, 00:15:58.363 "process": { 00:15:58.363 "type": "rebuild", 00:15:58.363 "target": "spare", 00:15:58.363 "progress": { 00:15:58.363 "blocks": 10240, 00:15:58.363 "percent": 16 00:15:58.363 } 00:15:58.363 }, 00:15:58.363 "base_bdevs_list": [ 00:15:58.363 { 00:15:58.363 "name": "spare", 00:15:58.363 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:15:58.363 "is_configured": true, 00:15:58.363 "data_offset": 2048, 00:15:58.363 "data_size": 63488 00:15:58.363 }, 00:15:58.363 { 00:15:58.363 "name": "BaseBdev2", 00:15:58.363 "uuid": "69fff842-0c70-54a7-8fa2-83f96ee986eb", 00:15:58.363 
"is_configured": true, 00:15:58.363 "data_offset": 2048, 00:15:58.363 "data_size": 63488 00:15:58.363 }, 00:15:58.363 { 00:15:58.363 "name": "BaseBdev3", 00:15:58.363 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:15:58.363 "is_configured": true, 00:15:58.363 "data_offset": 2048, 00:15:58.363 "data_size": 63488 00:15:58.363 }, 00:15:58.363 { 00:15:58.363 "name": "BaseBdev4", 00:15:58.363 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:15:58.363 "is_configured": true, 00:15:58.363 "data_offset": 2048, 00:15:58.363 "data_size": 63488 00:15:58.363 } 00:15:58.363 ] 00:15:58.363 }' 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.363 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.623 [2024-11-15 11:27:41.321552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.623 [2024-11-15 11:27:41.383828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:58.623 [2024-11-15 11:27:41.494036] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:58.623 [2024-11-15 11:27:41.507772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.623 [2024-11-15 11:27:41.507840] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.623 [2024-11-15 11:27:41.507867] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:58.623 [2024-11-15 11:27:41.539298] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.623 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.623 11:27:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.882 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.882 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.882 "name": "raid_bdev1", 00:15:58.882 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:15:58.882 "strip_size_kb": 0, 00:15:58.882 "state": "online", 00:15:58.882 "raid_level": "raid1", 00:15:58.882 "superblock": true, 00:15:58.882 "num_base_bdevs": 4, 00:15:58.882 "num_base_bdevs_discovered": 3, 00:15:58.882 "num_base_bdevs_operational": 3, 00:15:58.882 "base_bdevs_list": [ 00:15:58.882 { 00:15:58.882 "name": null, 00:15:58.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.882 "is_configured": false, 00:15:58.882 "data_offset": 0, 00:15:58.882 "data_size": 63488 00:15:58.882 }, 00:15:58.882 { 00:15:58.882 "name": "BaseBdev2", 00:15:58.882 "uuid": "69fff842-0c70-54a7-8fa2-83f96ee986eb", 00:15:58.882 "is_configured": true, 00:15:58.882 "data_offset": 2048, 00:15:58.882 "data_size": 63488 00:15:58.882 }, 00:15:58.882 { 00:15:58.882 "name": "BaseBdev3", 00:15:58.882 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:15:58.882 "is_configured": true, 00:15:58.882 "data_offset": 2048, 00:15:58.882 "data_size": 63488 00:15:58.882 }, 00:15:58.882 { 00:15:58.882 "name": "BaseBdev4", 00:15:58.882 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:15:58.882 "is_configured": true, 00:15:58.882 "data_offset": 2048, 00:15:58.882 "data_size": 63488 00:15:58.882 } 00:15:58.882 ] 00:15:58.882 }' 00:15:58.882 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.882 11:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.450 121.50 IOPS, 364.50 MiB/s [2024-11-15T11:27:42.400Z] 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.450 "name": "raid_bdev1", 00:15:59.450 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:15:59.450 "strip_size_kb": 0, 00:15:59.450 "state": "online", 00:15:59.450 "raid_level": "raid1", 00:15:59.450 "superblock": true, 00:15:59.450 "num_base_bdevs": 4, 00:15:59.450 "num_base_bdevs_discovered": 3, 00:15:59.450 "num_base_bdevs_operational": 3, 00:15:59.450 "base_bdevs_list": [ 00:15:59.450 { 00:15:59.450 "name": null, 00:15:59.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.450 "is_configured": false, 00:15:59.450 "data_offset": 0, 00:15:59.450 "data_size": 63488 00:15:59.450 }, 00:15:59.450 { 00:15:59.450 "name": "BaseBdev2", 00:15:59.450 "uuid": "69fff842-0c70-54a7-8fa2-83f96ee986eb", 00:15:59.450 "is_configured": true, 00:15:59.450 "data_offset": 2048, 00:15:59.450 "data_size": 63488 00:15:59.450 }, 00:15:59.450 { 00:15:59.450 "name": "BaseBdev3", 
00:15:59.450 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:15:59.450 "is_configured": true, 00:15:59.450 "data_offset": 2048, 00:15:59.450 "data_size": 63488 00:15:59.450 }, 00:15:59.450 { 00:15:59.450 "name": "BaseBdev4", 00:15:59.450 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:15:59.450 "is_configured": true, 00:15:59.450 "data_offset": 2048, 00:15:59.450 "data_size": 63488 00:15:59.450 } 00:15:59.450 ] 00:15:59.450 }' 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.450 [2024-11-15 11:27:42.298212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.450 11:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:59.450 [2024-11-15 11:27:42.383986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:59.450 [2024-11-15 11:27:42.387093] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:59.708 [2024-11-15 11:27:42.500405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:59.708 
[2024-11-15 11:27:42.502666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:59.968 132.33 IOPS, 397.00 MiB/s [2024-11-15T11:27:42.918Z] [2024-11-15 11:27:42.725671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:59.968 [2024-11-15 11:27:42.726231] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:00.227 [2024-11-15 11:27:42.982266] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:00.227 [2024-11-15 11:27:43.095964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:00.227 [2024-11-15 11:27:43.096525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:00.486 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.486 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.486 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.486 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.486 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.486 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.486 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.486 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.486 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.486 [2024-11-15 11:27:43.356071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:00.486 [2024-11-15 11:27:43.358533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:00.486 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.486 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.486 "name": "raid_bdev1", 00:16:00.486 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:00.486 "strip_size_kb": 0, 00:16:00.486 "state": "online", 00:16:00.486 "raid_level": "raid1", 00:16:00.486 "superblock": true, 00:16:00.486 "num_base_bdevs": 4, 00:16:00.486 "num_base_bdevs_discovered": 4, 00:16:00.486 "num_base_bdevs_operational": 4, 00:16:00.486 "process": { 00:16:00.486 "type": "rebuild", 00:16:00.486 "target": "spare", 00:16:00.486 "progress": { 00:16:00.486 "blocks": 12288, 00:16:00.486 "percent": 19 00:16:00.486 } 00:16:00.486 }, 00:16:00.486 "base_bdevs_list": [ 00:16:00.486 { 00:16:00.486 "name": "spare", 00:16:00.486 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:16:00.486 "is_configured": true, 00:16:00.486 "data_offset": 2048, 00:16:00.486 "data_size": 63488 00:16:00.486 }, 00:16:00.486 { 00:16:00.486 "name": "BaseBdev2", 00:16:00.486 "uuid": "69fff842-0c70-54a7-8fa2-83f96ee986eb", 00:16:00.486 "is_configured": true, 00:16:00.486 "data_offset": 2048, 00:16:00.486 "data_size": 63488 00:16:00.486 }, 00:16:00.486 { 00:16:00.486 "name": "BaseBdev3", 00:16:00.486 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:00.486 "is_configured": true, 00:16:00.486 "data_offset": 2048, 00:16:00.486 "data_size": 63488 00:16:00.487 }, 00:16:00.487 { 00:16:00.487 "name": "BaseBdev4", 00:16:00.487 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:00.487 "is_configured": true, 
00:16:00.487 "data_offset": 2048, 00:16:00.487 "data_size": 63488 00:16:00.487 } 00:16:00.487 ] 00:16:00.487 }' 00:16:00.487 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.746 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.746 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.746 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.746 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:00.746 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:00.746 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:00.746 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:00.746 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:00.746 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:00.746 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:00.746 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.746 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.746 [2024-11-15 11:27:43.524442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.746 [2024-11-15 11:27:43.603239] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:00.746 [2024-11-15 11:27:43.604443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:16:01.005 118.00 IOPS, 354.00 MiB/s [2024-11-15T11:27:43.955Z] [2024-11-15 11:27:43.814551] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:01.005 [2024-11-15 11:27:43.814612] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.005 "name": "raid_bdev1", 00:16:01.005 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 
00:16:01.005 "strip_size_kb": 0, 00:16:01.005 "state": "online", 00:16:01.005 "raid_level": "raid1", 00:16:01.005 "superblock": true, 00:16:01.005 "num_base_bdevs": 4, 00:16:01.005 "num_base_bdevs_discovered": 3, 00:16:01.005 "num_base_bdevs_operational": 3, 00:16:01.005 "process": { 00:16:01.005 "type": "rebuild", 00:16:01.005 "target": "spare", 00:16:01.005 "progress": { 00:16:01.005 "blocks": 16384, 00:16:01.005 "percent": 25 00:16:01.005 } 00:16:01.005 }, 00:16:01.005 "base_bdevs_list": [ 00:16:01.005 { 00:16:01.005 "name": "spare", 00:16:01.005 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:16:01.005 "is_configured": true, 00:16:01.005 "data_offset": 2048, 00:16:01.005 "data_size": 63488 00:16:01.005 }, 00:16:01.005 { 00:16:01.005 "name": null, 00:16:01.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.005 "is_configured": false, 00:16:01.005 "data_offset": 0, 00:16:01.005 "data_size": 63488 00:16:01.005 }, 00:16:01.005 { 00:16:01.005 "name": "BaseBdev3", 00:16:01.005 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:01.005 "is_configured": true, 00:16:01.005 "data_offset": 2048, 00:16:01.005 "data_size": 63488 00:16:01.005 }, 00:16:01.005 { 00:16:01.005 "name": "BaseBdev4", 00:16:01.005 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:01.005 "is_configured": true, 00:16:01.005 "data_offset": 2048, 00:16:01.005 "data_size": 63488 00:16:01.005 } 00:16:01.005 ] 00:16:01.005 }' 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.005 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.264 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.264 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=540 
00:16:01.264 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.264 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.264 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.264 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.264 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.264 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.264 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.264 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.264 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.264 11:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.264 11:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.264 11:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.264 "name": "raid_bdev1", 00:16:01.264 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:01.264 "strip_size_kb": 0, 00:16:01.264 "state": "online", 00:16:01.264 "raid_level": "raid1", 00:16:01.264 "superblock": true, 00:16:01.264 "num_base_bdevs": 4, 00:16:01.264 "num_base_bdevs_discovered": 3, 00:16:01.264 "num_base_bdevs_operational": 3, 00:16:01.264 "process": { 00:16:01.264 "type": "rebuild", 00:16:01.264 "target": "spare", 00:16:01.264 "progress": { 00:16:01.264 "blocks": 18432, 00:16:01.264 "percent": 29 00:16:01.264 } 00:16:01.264 }, 00:16:01.264 "base_bdevs_list": [ 00:16:01.264 { 00:16:01.264 "name": "spare", 
00:16:01.264 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:16:01.264 "is_configured": true, 00:16:01.264 "data_offset": 2048, 00:16:01.264 "data_size": 63488 00:16:01.264 }, 00:16:01.264 { 00:16:01.264 "name": null, 00:16:01.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.264 "is_configured": false, 00:16:01.264 "data_offset": 0, 00:16:01.264 "data_size": 63488 00:16:01.264 }, 00:16:01.264 { 00:16:01.264 "name": "BaseBdev3", 00:16:01.264 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:01.264 "is_configured": true, 00:16:01.264 "data_offset": 2048, 00:16:01.264 "data_size": 63488 00:16:01.264 }, 00:16:01.264 { 00:16:01.264 "name": "BaseBdev4", 00:16:01.264 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:01.264 "is_configured": true, 00:16:01.264 "data_offset": 2048, 00:16:01.264 "data_size": 63488 00:16:01.264 } 00:16:01.264 ] 00:16:01.264 }' 00:16:01.264 11:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.264 11:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.264 11:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.264 11:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.264 11:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.851 109.40 IOPS, 328.20 MiB/s [2024-11-15T11:27:44.801Z] [2024-11-15 11:27:44.785491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.430 "name": "raid_bdev1", 00:16:02.430 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:02.430 "strip_size_kb": 0, 00:16:02.430 "state": "online", 00:16:02.430 "raid_level": "raid1", 00:16:02.430 "superblock": true, 00:16:02.430 "num_base_bdevs": 4, 00:16:02.430 "num_base_bdevs_discovered": 3, 00:16:02.430 "num_base_bdevs_operational": 3, 00:16:02.430 "process": { 00:16:02.430 "type": "rebuild", 00:16:02.430 "target": "spare", 00:16:02.430 "progress": { 00:16:02.430 "blocks": 36864, 00:16:02.430 "percent": 58 00:16:02.430 } 00:16:02.430 }, 00:16:02.430 "base_bdevs_list": [ 00:16:02.430 { 00:16:02.430 "name": "spare", 00:16:02.430 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:16:02.430 "is_configured": true, 00:16:02.430 "data_offset": 2048, 00:16:02.430 "data_size": 63488 00:16:02.430 }, 00:16:02.430 { 00:16:02.430 "name": null, 00:16:02.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.430 "is_configured": false, 00:16:02.430 
"data_offset": 0, 00:16:02.430 "data_size": 63488 00:16:02.430 }, 00:16:02.430 { 00:16:02.430 "name": "BaseBdev3", 00:16:02.430 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:02.430 "is_configured": true, 00:16:02.430 "data_offset": 2048, 00:16:02.430 "data_size": 63488 00:16:02.430 }, 00:16:02.430 { 00:16:02.430 "name": "BaseBdev4", 00:16:02.430 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:02.430 "is_configured": true, 00:16:02.430 "data_offset": 2048, 00:16:02.430 "data_size": 63488 00:16:02.430 } 00:16:02.430 ] 00:16:02.430 }' 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.430 [2024-11-15 11:27:45.228057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.430 11:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.688 [2024-11-15 11:27:45.579817] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:02.946 99.00 IOPS, 297.00 MiB/s [2024-11-15T11:27:45.896Z] [2024-11-15 11:27:45.681717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:03.204 [2024-11-15 11:27:46.008590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.463 "name": "raid_bdev1", 00:16:03.463 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:03.463 "strip_size_kb": 0, 00:16:03.463 "state": "online", 00:16:03.463 "raid_level": "raid1", 00:16:03.463 "superblock": true, 00:16:03.463 "num_base_bdevs": 4, 00:16:03.463 "num_base_bdevs_discovered": 3, 00:16:03.463 "num_base_bdevs_operational": 3, 00:16:03.463 "process": { 00:16:03.463 "type": "rebuild", 00:16:03.463 "target": "spare", 00:16:03.463 "progress": { 00:16:03.463 "blocks": 57344, 00:16:03.463 "percent": 90 00:16:03.463 } 00:16:03.463 }, 00:16:03.463 "base_bdevs_list": [ 00:16:03.463 { 00:16:03.463 "name": "spare", 00:16:03.463 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:16:03.463 "is_configured": true, 00:16:03.463 "data_offset": 2048, 00:16:03.463 "data_size": 63488 
00:16:03.463 }, 00:16:03.463 { 00:16:03.463 "name": null, 00:16:03.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.463 "is_configured": false, 00:16:03.463 "data_offset": 0, 00:16:03.463 "data_size": 63488 00:16:03.463 }, 00:16:03.463 { 00:16:03.463 "name": "BaseBdev3", 00:16:03.463 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:03.463 "is_configured": true, 00:16:03.463 "data_offset": 2048, 00:16:03.463 "data_size": 63488 00:16:03.463 }, 00:16:03.463 { 00:16:03.463 "name": "BaseBdev4", 00:16:03.463 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:03.463 "is_configured": true, 00:16:03.463 "data_offset": 2048, 00:16:03.463 "data_size": 63488 00:16:03.463 } 00:16:03.463 ] 00:16:03.463 }' 00:16:03.463 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.722 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.722 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.722 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.722 11:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.722 [2024-11-15 11:27:46.577239] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:03.980 89.29 IOPS, 267.86 MiB/s [2024-11-15T11:27:46.930Z] [2024-11-15 11:27:46.683904] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:03.980 [2024-11-15 11:27:46.687832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.546 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.546 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.546 11:27:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.546 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.546 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.546 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.546 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.546 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.546 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.546 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.804 "name": "raid_bdev1", 00:16:04.804 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:04.804 "strip_size_kb": 0, 00:16:04.804 "state": "online", 00:16:04.804 "raid_level": "raid1", 00:16:04.804 "superblock": true, 00:16:04.804 "num_base_bdevs": 4, 00:16:04.804 "num_base_bdevs_discovered": 3, 00:16:04.804 "num_base_bdevs_operational": 3, 00:16:04.804 "base_bdevs_list": [ 00:16:04.804 { 00:16:04.804 "name": "spare", 00:16:04.804 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:16:04.804 "is_configured": true, 00:16:04.804 "data_offset": 2048, 00:16:04.804 "data_size": 63488 00:16:04.804 }, 00:16:04.804 { 00:16:04.804 "name": null, 00:16:04.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.804 "is_configured": false, 00:16:04.804 "data_offset": 0, 00:16:04.804 "data_size": 63488 00:16:04.804 }, 00:16:04.804 { 00:16:04.804 "name": "BaseBdev3", 00:16:04.804 "uuid": 
"59f39b88-971c-5938-89a5-897d73807706", 00:16:04.804 "is_configured": true, 00:16:04.804 "data_offset": 2048, 00:16:04.804 "data_size": 63488 00:16:04.804 }, 00:16:04.804 { 00:16:04.804 "name": "BaseBdev4", 00:16:04.804 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:04.804 "is_configured": true, 00:16:04.804 "data_offset": 2048, 00:16:04.804 "data_size": 63488 00:16:04.804 } 00:16:04.804 ] 00:16:04.804 }' 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.804 83.00 IOPS, 249.00 MiB/s [2024-11-15T11:27:47.754Z] 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.804 "name": "raid_bdev1", 00:16:04.804 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:04.804 "strip_size_kb": 0, 00:16:04.804 "state": "online", 00:16:04.804 "raid_level": "raid1", 00:16:04.804 "superblock": true, 00:16:04.804 "num_base_bdevs": 4, 00:16:04.804 "num_base_bdevs_discovered": 3, 00:16:04.804 "num_base_bdevs_operational": 3, 00:16:04.804 "base_bdevs_list": [ 00:16:04.804 { 00:16:04.804 "name": "spare", 00:16:04.804 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:16:04.804 "is_configured": true, 00:16:04.804 "data_offset": 2048, 00:16:04.804 "data_size": 63488 00:16:04.804 }, 00:16:04.804 { 00:16:04.804 "name": null, 00:16:04.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.804 "is_configured": false, 00:16:04.804 "data_offset": 0, 00:16:04.804 "data_size": 63488 00:16:04.804 }, 00:16:04.804 { 00:16:04.804 "name": "BaseBdev3", 00:16:04.804 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:04.804 "is_configured": true, 00:16:04.804 "data_offset": 2048, 00:16:04.804 "data_size": 63488 00:16:04.804 }, 00:16:04.804 { 00:16:04.804 "name": "BaseBdev4", 00:16:04.804 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:04.804 "is_configured": true, 00:16:04.804 "data_offset": 2048, 00:16:04.804 "data_size": 63488 00:16:04.804 } 00:16:04.804 ] 00:16:04.804 }' 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.804 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.063 "name": "raid_bdev1", 00:16:05.063 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:05.063 "strip_size_kb": 0, 00:16:05.063 
"state": "online", 00:16:05.063 "raid_level": "raid1", 00:16:05.063 "superblock": true, 00:16:05.063 "num_base_bdevs": 4, 00:16:05.063 "num_base_bdevs_discovered": 3, 00:16:05.063 "num_base_bdevs_operational": 3, 00:16:05.063 "base_bdevs_list": [ 00:16:05.063 { 00:16:05.063 "name": "spare", 00:16:05.063 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:16:05.063 "is_configured": true, 00:16:05.063 "data_offset": 2048, 00:16:05.063 "data_size": 63488 00:16:05.063 }, 00:16:05.063 { 00:16:05.063 "name": null, 00:16:05.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.063 "is_configured": false, 00:16:05.063 "data_offset": 0, 00:16:05.063 "data_size": 63488 00:16:05.063 }, 00:16:05.063 { 00:16:05.063 "name": "BaseBdev3", 00:16:05.063 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:05.063 "is_configured": true, 00:16:05.063 "data_offset": 2048, 00:16:05.063 "data_size": 63488 00:16:05.063 }, 00:16:05.063 { 00:16:05.063 "name": "BaseBdev4", 00:16:05.063 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:05.063 "is_configured": true, 00:16:05.063 "data_offset": 2048, 00:16:05.063 "data_size": 63488 00:16:05.063 } 00:16:05.063 ] 00:16:05.063 }' 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.063 11:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.631 [2024-11-15 11:27:48.329122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.631 [2024-11-15 11:27:48.329164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.631 00:16:05.631 Latency(us) 
00:16:05.631 [2024-11-15T11:27:48.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.631 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:05.631 raid_bdev1 : 8.70 79.42 238.25 0.00 0.00 18019.76 275.55 124875.87 00:16:05.631 [2024-11-15T11:27:48.581Z] =================================================================================================================== 00:16:05.631 [2024-11-15T11:27:48.581Z] Total : 79.42 238.25 0.00 0.00 18019.76 275.55 124875.87 00:16:05.631 [2024-11-15 11:27:48.379393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.631 [2024-11-15 11:27:48.379635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.631 [2024-11-15 11:27:48.379804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.631 [2024-11-15 11:27:48.379956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:05.631 { 00:16:05.631 "results": [ 00:16:05.631 { 00:16:05.631 "job": "raid_bdev1", 00:16:05.631 "core_mask": "0x1", 00:16:05.631 "workload": "randrw", 00:16:05.631 "percentage": 50, 00:16:05.631 "status": "finished", 00:16:05.631 "queue_depth": 2, 00:16:05.631 "io_size": 3145728, 00:16:05.631 "runtime": 8.701078, 00:16:05.631 "iops": 79.4154471434459, 00:16:05.631 "mibps": 238.2463414303377, 00:16:05.631 "io_failed": 0, 00:16:05.631 "io_timeout": 0, 00:16:05.631 "avg_latency_us": 18019.759505328246, 00:16:05.631 "min_latency_us": 275.5490909090909, 00:16:05.631 "max_latency_us": 124875.8690909091 00:16:05.631 } 00:16:05.631 ], 00:16:05.631 "core_count": 1 00:16:05.631 } 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.631 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:05.890 /dev/nbd0 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.890 1+0 records in 00:16:05.890 1+0 records out 00:16:05.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300367 s, 13.6 MB/s 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.890 11:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:06.457 /dev/nbd1 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.457 1+0 records in 00:16:06.457 1+0 records out 00:16:06.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547349 s, 7.5 MB/s 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.457 11:27:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.457 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:06.717 
11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.717 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:06.975 /dev/nbd1 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.234 1+0 records in 00:16:07.234 1+0 records out 00:16:07.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546222 s, 7.5 MB/s 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.234 11:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:07.234 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:07.234 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.234 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:16:07.234 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.234 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:07.234 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.234 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:07.493 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.493 11:27:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.752 [2024-11-15 11:27:50.594452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:07.752 
[2024-11-15 11:27:50.594523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.752 [2024-11-15 11:27:50.594564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:07.752 [2024-11-15 11:27:50.594582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.752 [2024-11-15 11:27:50.597877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.752 [2024-11-15 11:27:50.597948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:07.752 [2024-11-15 11:27:50.598082] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:07.752 [2024-11-15 11:27:50.598177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.752 [2024-11-15 11:27:50.598387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.752 [2024-11-15 11:27:50.598604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:07.752 spare 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.752 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.752 [2024-11-15 11:27:50.698791] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:07.752 [2024-11-15 11:27:50.698994] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:07.752 [2024-11-15 11:27:50.699368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:07.752 [2024-11-15 11:27:50.699673] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:07.752 [2024-11-15 11:27:50.699690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:07.752 [2024-11-15 11:27:50.699898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.011 11:27:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.011 "name": "raid_bdev1", 00:16:08.011 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:08.011 "strip_size_kb": 0, 00:16:08.011 "state": "online", 00:16:08.011 "raid_level": "raid1", 00:16:08.011 "superblock": true, 00:16:08.011 "num_base_bdevs": 4, 00:16:08.011 "num_base_bdevs_discovered": 3, 00:16:08.011 "num_base_bdevs_operational": 3, 00:16:08.011 "base_bdevs_list": [ 00:16:08.011 { 00:16:08.011 "name": "spare", 00:16:08.011 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:16:08.011 "is_configured": true, 00:16:08.011 "data_offset": 2048, 00:16:08.011 "data_size": 63488 00:16:08.011 }, 00:16:08.011 { 00:16:08.011 "name": null, 00:16:08.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.011 "is_configured": false, 00:16:08.011 "data_offset": 2048, 00:16:08.011 "data_size": 63488 00:16:08.011 }, 00:16:08.011 { 00:16:08.011 "name": "BaseBdev3", 00:16:08.011 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:08.011 "is_configured": true, 00:16:08.011 "data_offset": 2048, 00:16:08.011 "data_size": 63488 00:16:08.011 }, 00:16:08.011 { 00:16:08.011 "name": "BaseBdev4", 00:16:08.011 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:08.011 "is_configured": true, 00:16:08.011 "data_offset": 2048, 00:16:08.011 "data_size": 63488 00:16:08.011 } 00:16:08.011 ] 00:16:08.011 }' 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.011 11:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.580 "name": "raid_bdev1", 00:16:08.580 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:08.580 "strip_size_kb": 0, 00:16:08.580 "state": "online", 00:16:08.580 "raid_level": "raid1", 00:16:08.580 "superblock": true, 00:16:08.580 "num_base_bdevs": 4, 00:16:08.580 "num_base_bdevs_discovered": 3, 00:16:08.580 "num_base_bdevs_operational": 3, 00:16:08.580 "base_bdevs_list": [ 00:16:08.580 { 00:16:08.580 "name": "spare", 00:16:08.580 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:16:08.580 "is_configured": true, 00:16:08.580 "data_offset": 2048, 00:16:08.580 "data_size": 63488 00:16:08.580 }, 00:16:08.580 { 00:16:08.580 "name": null, 00:16:08.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.580 "is_configured": false, 00:16:08.580 "data_offset": 2048, 00:16:08.580 "data_size": 63488 00:16:08.580 }, 00:16:08.580 { 00:16:08.580 "name": "BaseBdev3", 00:16:08.580 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 
00:16:08.580 "is_configured": true, 00:16:08.580 "data_offset": 2048, 00:16:08.580 "data_size": 63488 00:16:08.580 }, 00:16:08.580 { 00:16:08.580 "name": "BaseBdev4", 00:16:08.580 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:08.580 "is_configured": true, 00:16:08.580 "data_offset": 2048, 00:16:08.580 "data_size": 63488 00:16:08.580 } 00:16:08.580 ] 00:16:08.580 }' 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.580 [2024-11-15 11:27:51.459089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.580 "name": "raid_bdev1", 00:16:08.580 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:08.580 "strip_size_kb": 0, 00:16:08.580 "state": 
"online", 00:16:08.580 "raid_level": "raid1", 00:16:08.580 "superblock": true, 00:16:08.580 "num_base_bdevs": 4, 00:16:08.580 "num_base_bdevs_discovered": 2, 00:16:08.580 "num_base_bdevs_operational": 2, 00:16:08.580 "base_bdevs_list": [ 00:16:08.580 { 00:16:08.580 "name": null, 00:16:08.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.580 "is_configured": false, 00:16:08.580 "data_offset": 0, 00:16:08.580 "data_size": 63488 00:16:08.580 }, 00:16:08.580 { 00:16:08.580 "name": null, 00:16:08.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.580 "is_configured": false, 00:16:08.580 "data_offset": 2048, 00:16:08.580 "data_size": 63488 00:16:08.580 }, 00:16:08.580 { 00:16:08.580 "name": "BaseBdev3", 00:16:08.580 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:08.580 "is_configured": true, 00:16:08.580 "data_offset": 2048, 00:16:08.580 "data_size": 63488 00:16:08.580 }, 00:16:08.580 { 00:16:08.580 "name": "BaseBdev4", 00:16:08.580 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:08.580 "is_configured": true, 00:16:08.580 "data_offset": 2048, 00:16:08.580 "data_size": 63488 00:16:08.580 } 00:16:08.580 ] 00:16:08.580 }' 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.580 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.148 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:09.148 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.148 11:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.148 [2024-11-15 11:27:52.003368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.148 [2024-11-15 11:27:52.003744] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:16:09.148 [2024-11-15 11:27:52.003780] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:09.148 [2024-11-15 11:27:52.003853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.148 [2024-11-15 11:27:52.018259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:09.148 11:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.148 11:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:09.148 [2024-11-15 11:27:52.021023] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.085 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.085 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.085 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.085 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.085 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.085 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.085 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.085 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.085 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.344 
"name": "raid_bdev1", 00:16:10.344 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:10.344 "strip_size_kb": 0, 00:16:10.344 "state": "online", 00:16:10.344 "raid_level": "raid1", 00:16:10.344 "superblock": true, 00:16:10.344 "num_base_bdevs": 4, 00:16:10.344 "num_base_bdevs_discovered": 3, 00:16:10.344 "num_base_bdevs_operational": 3, 00:16:10.344 "process": { 00:16:10.344 "type": "rebuild", 00:16:10.344 "target": "spare", 00:16:10.344 "progress": { 00:16:10.344 "blocks": 20480, 00:16:10.344 "percent": 32 00:16:10.344 } 00:16:10.344 }, 00:16:10.344 "base_bdevs_list": [ 00:16:10.344 { 00:16:10.344 "name": "spare", 00:16:10.344 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:16:10.344 "is_configured": true, 00:16:10.344 "data_offset": 2048, 00:16:10.344 "data_size": 63488 00:16:10.344 }, 00:16:10.344 { 00:16:10.344 "name": null, 00:16:10.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.344 "is_configured": false, 00:16:10.344 "data_offset": 2048, 00:16:10.344 "data_size": 63488 00:16:10.344 }, 00:16:10.344 { 00:16:10.344 "name": "BaseBdev3", 00:16:10.344 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:10.344 "is_configured": true, 00:16:10.344 "data_offset": 2048, 00:16:10.344 "data_size": 63488 00:16:10.344 }, 00:16:10.344 { 00:16:10.344 "name": "BaseBdev4", 00:16:10.344 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:10.344 "is_configured": true, 00:16:10.344 "data_offset": 2048, 00:16:10.344 "data_size": 63488 00:16:10.344 } 00:16:10.344 ] 00:16:10.344 }' 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.344 
11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.344 [2024-11-15 11:27:53.199648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.344 [2024-11-15 11:27:53.231467] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.344 [2024-11-15 11:27:53.231750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.344 [2024-11-15 11:27:53.231994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.344 [2024-11-15 11:27:53.232055] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.344 11:27:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.344 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.665 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.665 "name": "raid_bdev1", 00:16:10.665 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:10.665 "strip_size_kb": 0, 00:16:10.665 "state": "online", 00:16:10.665 "raid_level": "raid1", 00:16:10.665 "superblock": true, 00:16:10.665 "num_base_bdevs": 4, 00:16:10.665 "num_base_bdevs_discovered": 2, 00:16:10.665 "num_base_bdevs_operational": 2, 00:16:10.665 "base_bdevs_list": [ 00:16:10.665 { 00:16:10.665 "name": null, 00:16:10.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.665 "is_configured": false, 00:16:10.665 "data_offset": 0, 00:16:10.665 "data_size": 63488 00:16:10.665 }, 00:16:10.665 { 00:16:10.665 "name": null, 00:16:10.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.665 "is_configured": false, 00:16:10.665 "data_offset": 2048, 00:16:10.665 "data_size": 63488 00:16:10.665 }, 00:16:10.665 { 00:16:10.665 "name": "BaseBdev3", 00:16:10.665 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:10.665 "is_configured": true, 00:16:10.665 "data_offset": 2048, 00:16:10.665 "data_size": 63488 00:16:10.665 }, 00:16:10.665 { 00:16:10.665 "name": "BaseBdev4", 00:16:10.665 "uuid": 
"7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:10.665 "is_configured": true, 00:16:10.665 "data_offset": 2048, 00:16:10.665 "data_size": 63488 00:16:10.665 } 00:16:10.665 ] 00:16:10.665 }' 00:16:10.665 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.665 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.937 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.937 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.937 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.937 [2024-11-15 11:27:53.815879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.937 [2024-11-15 11:27:53.815993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.937 [2024-11-15 11:27:53.816039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:10.937 [2024-11-15 11:27:53.816058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.937 [2024-11-15 11:27:53.816810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.937 [2024-11-15 11:27:53.816847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.937 [2024-11-15 11:27:53.817022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:10.937 [2024-11-15 11:27:53.817049] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:10.937 [2024-11-15 11:27:53.817063] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:10.937 [2024-11-15 11:27:53.817102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.937 [2024-11-15 11:27:53.832064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:10.937 spare 00:16:10.937 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.937 11:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:10.937 [2024-11-15 11:27:53.834947] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.313 "name": "raid_bdev1", 00:16:12.313 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:12.313 "strip_size_kb": 0, 00:16:12.313 
"state": "online", 00:16:12.313 "raid_level": "raid1", 00:16:12.313 "superblock": true, 00:16:12.313 "num_base_bdevs": 4, 00:16:12.313 "num_base_bdevs_discovered": 3, 00:16:12.313 "num_base_bdevs_operational": 3, 00:16:12.313 "process": { 00:16:12.313 "type": "rebuild", 00:16:12.313 "target": "spare", 00:16:12.313 "progress": { 00:16:12.313 "blocks": 20480, 00:16:12.313 "percent": 32 00:16:12.313 } 00:16:12.313 }, 00:16:12.313 "base_bdevs_list": [ 00:16:12.313 { 00:16:12.313 "name": "spare", 00:16:12.313 "uuid": "3cfc6590-6040-589b-b70e-b30cda032a98", 00:16:12.313 "is_configured": true, 00:16:12.313 "data_offset": 2048, 00:16:12.313 "data_size": 63488 00:16:12.313 }, 00:16:12.313 { 00:16:12.313 "name": null, 00:16:12.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.313 "is_configured": false, 00:16:12.313 "data_offset": 2048, 00:16:12.313 "data_size": 63488 00:16:12.313 }, 00:16:12.313 { 00:16:12.313 "name": "BaseBdev3", 00:16:12.313 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:12.313 "is_configured": true, 00:16:12.313 "data_offset": 2048, 00:16:12.313 "data_size": 63488 00:16:12.313 }, 00:16:12.313 { 00:16:12.313 "name": "BaseBdev4", 00:16:12.313 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:12.313 "is_configured": true, 00:16:12.313 "data_offset": 2048, 00:16:12.313 "data_size": 63488 00:16:12.313 } 00:16:12.313 ] 00:16:12.313 }' 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.313 11:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.313 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.313 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:12.313 11:27:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.313 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.313 [2024-11-15 11:27:55.009226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.313 [2024-11-15 11:27:55.045983] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:12.314 [2024-11-15 11:27:55.046096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.314 [2024-11-15 11:27:55.046131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.314 [2024-11-15 11:27:55.046143] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.314 11:27:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.314 "name": "raid_bdev1", 00:16:12.314 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:12.314 "strip_size_kb": 0, 00:16:12.314 "state": "online", 00:16:12.314 "raid_level": "raid1", 00:16:12.314 "superblock": true, 00:16:12.314 "num_base_bdevs": 4, 00:16:12.314 "num_base_bdevs_discovered": 2, 00:16:12.314 "num_base_bdevs_operational": 2, 00:16:12.314 "base_bdevs_list": [ 00:16:12.314 { 00:16:12.314 "name": null, 00:16:12.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.314 "is_configured": false, 00:16:12.314 "data_offset": 0, 00:16:12.314 "data_size": 63488 00:16:12.314 }, 00:16:12.314 { 00:16:12.314 "name": null, 00:16:12.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.314 "is_configured": false, 00:16:12.314 "data_offset": 2048, 00:16:12.314 "data_size": 63488 00:16:12.314 }, 00:16:12.314 { 00:16:12.314 "name": "BaseBdev3", 00:16:12.314 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:12.314 "is_configured": true, 00:16:12.314 "data_offset": 2048, 00:16:12.314 "data_size": 63488 00:16:12.314 }, 00:16:12.314 { 00:16:12.314 "name": "BaseBdev4", 00:16:12.314 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:12.314 "is_configured": true, 00:16:12.314 "data_offset": 2048, 00:16:12.314 
"data_size": 63488 00:16:12.314 } 00:16:12.314 ] 00:16:12.314 }' 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.314 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.882 "name": "raid_bdev1", 00:16:12.882 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:12.882 "strip_size_kb": 0, 00:16:12.882 "state": "online", 00:16:12.882 "raid_level": "raid1", 00:16:12.882 "superblock": true, 00:16:12.882 "num_base_bdevs": 4, 00:16:12.882 "num_base_bdevs_discovered": 2, 00:16:12.882 "num_base_bdevs_operational": 2, 00:16:12.882 "base_bdevs_list": [ 00:16:12.882 { 00:16:12.882 "name": null, 00:16:12.882 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:12.882 "is_configured": false, 00:16:12.882 "data_offset": 0, 00:16:12.882 "data_size": 63488 00:16:12.882 }, 00:16:12.882 { 00:16:12.882 "name": null, 00:16:12.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.882 "is_configured": false, 00:16:12.882 "data_offset": 2048, 00:16:12.882 "data_size": 63488 00:16:12.882 }, 00:16:12.882 { 00:16:12.882 "name": "BaseBdev3", 00:16:12.882 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:12.882 "is_configured": true, 00:16:12.882 "data_offset": 2048, 00:16:12.882 "data_size": 63488 00:16:12.882 }, 00:16:12.882 { 00:16:12.882 "name": "BaseBdev4", 00:16:12.882 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:12.882 "is_configured": true, 00:16:12.882 "data_offset": 2048, 00:16:12.882 "data_size": 63488 00:16:12.882 } 00:16:12.882 ] 00:16:12.882 }' 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.882 11:27:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.882 [2024-11-15 11:27:55.760499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:12.882 [2024-11-15 11:27:55.760614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.882 [2024-11-15 11:27:55.760649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:12.882 [2024-11-15 11:27:55.760664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.882 [2024-11-15 11:27:55.761310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.882 [2024-11-15 11:27:55.761341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:12.882 [2024-11-15 11:27:55.761449] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:12.882 [2024-11-15 11:27:55.761477] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:12.882 [2024-11-15 11:27:55.761507] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:12.882 [2024-11-15 11:27:55.761537] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:12.882 BaseBdev1 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.882 11:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:14.257 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:14.257 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.257 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:14.257 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.257 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.257 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.257 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.257 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.257 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.257 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.257 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.258 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.258 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.258 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.258 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.258 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.258 "name": "raid_bdev1", 00:16:14.258 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:14.258 "strip_size_kb": 0, 00:16:14.258 "state": "online", 00:16:14.258 "raid_level": "raid1", 00:16:14.258 "superblock": true, 00:16:14.258 "num_base_bdevs": 4, 00:16:14.258 "num_base_bdevs_discovered": 2, 00:16:14.258 "num_base_bdevs_operational": 2, 00:16:14.258 "base_bdevs_list": [ 00:16:14.258 { 00:16:14.258 "name": null, 00:16:14.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.258 "is_configured": false, 00:16:14.258 
"data_offset": 0, 00:16:14.258 "data_size": 63488 00:16:14.258 }, 00:16:14.258 { 00:16:14.258 "name": null, 00:16:14.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.258 "is_configured": false, 00:16:14.258 "data_offset": 2048, 00:16:14.258 "data_size": 63488 00:16:14.258 }, 00:16:14.258 { 00:16:14.258 "name": "BaseBdev3", 00:16:14.258 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:14.258 "is_configured": true, 00:16:14.258 "data_offset": 2048, 00:16:14.258 "data_size": 63488 00:16:14.258 }, 00:16:14.258 { 00:16:14.258 "name": "BaseBdev4", 00:16:14.258 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:14.258 "is_configured": true, 00:16:14.258 "data_offset": 2048, 00:16:14.258 "data_size": 63488 00:16:14.258 } 00:16:14.258 ] 00:16:14.258 }' 00:16:14.258 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.258 11:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.516 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.516 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.516 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.516 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.516 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.516 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.516 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.516 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.516 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:14.516 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.516 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.516 "name": "raid_bdev1", 00:16:14.516 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:14.516 "strip_size_kb": 0, 00:16:14.516 "state": "online", 00:16:14.516 "raid_level": "raid1", 00:16:14.516 "superblock": true, 00:16:14.516 "num_base_bdevs": 4, 00:16:14.516 "num_base_bdevs_discovered": 2, 00:16:14.516 "num_base_bdevs_operational": 2, 00:16:14.516 "base_bdevs_list": [ 00:16:14.516 { 00:16:14.516 "name": null, 00:16:14.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.516 "is_configured": false, 00:16:14.516 "data_offset": 0, 00:16:14.516 "data_size": 63488 00:16:14.516 }, 00:16:14.516 { 00:16:14.516 "name": null, 00:16:14.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.516 "is_configured": false, 00:16:14.516 "data_offset": 2048, 00:16:14.516 "data_size": 63488 00:16:14.516 }, 00:16:14.516 { 00:16:14.516 "name": "BaseBdev3", 00:16:14.516 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:14.516 "is_configured": true, 00:16:14.516 "data_offset": 2048, 00:16:14.516 "data_size": 63488 00:16:14.516 }, 00:16:14.516 { 00:16:14.516 "name": "BaseBdev4", 00:16:14.516 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:14.516 "is_configured": true, 00:16:14.516 "data_offset": 2048, 00:16:14.516 "data_size": 63488 00:16:14.516 } 00:16:14.516 ] 00:16:14.516 }' 00:16:14.516 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.517 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.517 [2024-11-15 11:27:57.461339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.517 [2024-11-15 11:27:57.461612] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:14.517 [2024-11-15 11:27:57.461637] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:14.776 request: 00:16:14.776 { 00:16:14.776 "base_bdev": "BaseBdev1", 00:16:14.776 "raid_bdev": "raid_bdev1", 00:16:14.776 "method": "bdev_raid_add_base_bdev", 00:16:14.776 "req_id": 1 00:16:14.776 } 00:16:14.776 Got JSON-RPC error response 00:16:14.776 response: 00:16:14.776 { 00:16:14.776 "code": -22, 
00:16:14.776 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:14.776 } 00:16:14.776 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:14.776 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:16:14.776 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:14.776 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:14.776 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:14.777 11:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.712 11:27:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.712 "name": "raid_bdev1", 00:16:15.712 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:15.712 "strip_size_kb": 0, 00:16:15.712 "state": "online", 00:16:15.712 "raid_level": "raid1", 00:16:15.712 "superblock": true, 00:16:15.712 "num_base_bdevs": 4, 00:16:15.712 "num_base_bdevs_discovered": 2, 00:16:15.712 "num_base_bdevs_operational": 2, 00:16:15.712 "base_bdevs_list": [ 00:16:15.712 { 00:16:15.712 "name": null, 00:16:15.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.712 "is_configured": false, 00:16:15.712 "data_offset": 0, 00:16:15.712 "data_size": 63488 00:16:15.712 }, 00:16:15.712 { 00:16:15.712 "name": null, 00:16:15.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.712 "is_configured": false, 00:16:15.712 "data_offset": 2048, 00:16:15.712 "data_size": 63488 00:16:15.712 }, 00:16:15.712 { 00:16:15.712 "name": "BaseBdev3", 00:16:15.712 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:15.712 "is_configured": true, 00:16:15.712 "data_offset": 2048, 00:16:15.712 "data_size": 63488 00:16:15.712 }, 00:16:15.712 { 00:16:15.712 "name": "BaseBdev4", 00:16:15.712 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:15.712 "is_configured": true, 00:16:15.712 "data_offset": 2048, 00:16:15.712 "data_size": 63488 00:16:15.712 } 00:16:15.712 ] 00:16:15.712 }' 00:16:15.712 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.712 11:27:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.280 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.280 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.280 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.280 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.280 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.280 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.280 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.280 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.280 11:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.280 "name": "raid_bdev1", 00:16:16.280 "uuid": "487a05eb-bea6-4602-a399-732e738624d2", 00:16:16.280 "strip_size_kb": 0, 00:16:16.280 "state": "online", 00:16:16.280 "raid_level": "raid1", 00:16:16.280 "superblock": true, 00:16:16.280 "num_base_bdevs": 4, 00:16:16.280 "num_base_bdevs_discovered": 2, 00:16:16.280 "num_base_bdevs_operational": 2, 00:16:16.280 "base_bdevs_list": [ 00:16:16.280 { 00:16:16.280 "name": null, 00:16:16.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.280 "is_configured": false, 00:16:16.280 "data_offset": 0, 00:16:16.280 "data_size": 63488 00:16:16.280 }, 00:16:16.280 { 00:16:16.280 "name": null, 00:16:16.280 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:16.280 "is_configured": false, 00:16:16.280 "data_offset": 2048, 00:16:16.280 "data_size": 63488 00:16:16.280 }, 00:16:16.280 { 00:16:16.280 "name": "BaseBdev3", 00:16:16.280 "uuid": "59f39b88-971c-5938-89a5-897d73807706", 00:16:16.280 "is_configured": true, 00:16:16.280 "data_offset": 2048, 00:16:16.280 "data_size": 63488 00:16:16.280 }, 00:16:16.280 { 00:16:16.280 "name": "BaseBdev4", 00:16:16.280 "uuid": "7bdbebdb-bc1d-5576-a030-49887fece705", 00:16:16.280 "is_configured": true, 00:16:16.280 "data_offset": 2048, 00:16:16.280 "data_size": 63488 00:16:16.280 } 00:16:16.280 ] 00:16:16.280 }' 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79360 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 79360 ']' 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79360 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79360 00:16:16.280 killing process with pid 79360 00:16:16.280 Received shutdown signal, test time was about 19.519312 seconds 00:16:16.280 00:16:16.280 Latency(us) 00:16:16.280 [2024-11-15T11:27:59.230Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:16:16.280 [2024-11-15T11:27:59.230Z] =================================================================================================================== 00:16:16.280 [2024-11-15T11:27:59.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79360' 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79360 00:16:16.280 [2024-11-15 11:27:59.179280] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.280 11:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79360 00:16:16.280 [2024-11-15 11:27:59.179456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.280 [2024-11-15 11:27:59.179610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.280 [2024-11-15 11:27:59.179650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:16.848 [2024-11-15 11:27:59.546176] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.786 ************************************ 00:16:17.786 END TEST raid_rebuild_test_sb_io 00:16:17.786 ************************************ 00:16:17.786 11:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:17.786 00:16:17.786 real 0m23.092s 00:16:17.786 user 0m31.319s 00:16:17.786 sys 0m2.521s 00:16:17.786 11:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:17.786 11:28:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:17.786 11:28:00 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:17.786 11:28:00 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:17.786 11:28:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:17.786 11:28:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:17.786 11:28:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.786 ************************************ 00:16:17.786 START TEST raid5f_state_function_test 00:16:17.786 ************************************ 00:16:17.786 11:28:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:16:17.786 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:17.786 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:17.786 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:17.786 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:18.045 11:28:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80099 00:16:18.045 Process raid pid: 80099 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80099' 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80099 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80099 ']' 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:18.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:18.045 11:28:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.045 [2024-11-15 11:28:00.856543] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:16:18.045 [2024-11-15 11:28:00.856746] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.304 [2024-11-15 11:28:01.047851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.304 [2024-11-15 11:28:01.194664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.563 [2024-11-15 11:28:01.421667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.563 [2024-11-15 11:28:01.421734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.130 11:28:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:19.130 11:28:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:19.130 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:19.130 11:28:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.130 11:28:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.130 [2024-11-15 11:28:01.841634] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.130 [2024-11-15 11:28:01.841746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.130 [2024-11-15 11:28:01.841764] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.130 [2024-11-15 11:28:01.841781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.130 [2024-11-15 11:28:01.841798] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:19.130 [2024-11-15 11:28:01.841814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:19.130 11:28:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.130 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:19.130 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.130 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.130 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.130 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.131 "name": "Existed_Raid", 00:16:19.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.131 "strip_size_kb": 64, 00:16:19.131 "state": "configuring", 00:16:19.131 "raid_level": "raid5f", 00:16:19.131 "superblock": false, 00:16:19.131 "num_base_bdevs": 3, 00:16:19.131 "num_base_bdevs_discovered": 0, 00:16:19.131 "num_base_bdevs_operational": 3, 00:16:19.131 "base_bdevs_list": [ 00:16:19.131 { 00:16:19.131 "name": "BaseBdev1", 00:16:19.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.131 "is_configured": false, 00:16:19.131 "data_offset": 0, 00:16:19.131 "data_size": 0 00:16:19.131 }, 00:16:19.131 { 00:16:19.131 "name": "BaseBdev2", 00:16:19.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.131 "is_configured": false, 00:16:19.131 "data_offset": 0, 00:16:19.131 "data_size": 0 00:16:19.131 }, 00:16:19.131 { 00:16:19.131 "name": "BaseBdev3", 00:16:19.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.131 "is_configured": false, 00:16:19.131 "data_offset": 0, 00:16:19.131 "data_size": 0 00:16:19.131 } 00:16:19.131 ] 00:16:19.131 }' 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.131 11:28:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.698 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:19.698 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.698 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.698 [2024-11-15 11:28:02.349771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.698 [2024-11-15 11:28:02.349836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:19.698 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.698 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:19.698 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.698 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.698 [2024-11-15 11:28:02.361759] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.698 [2024-11-15 11:28:02.361835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.698 [2024-11-15 11:28:02.361851] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.698 [2024-11-15 11:28:02.361868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.698 [2024-11-15 11:28:02.361878] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:19.698 [2024-11-15 11:28:02.361893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:19.698 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.698 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:19.698 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.698 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.699 [2024-11-15 11:28:02.415955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.699 BaseBdev1 00:16:19.699 11:28:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.699 [ 00:16:19.699 { 00:16:19.699 "name": "BaseBdev1", 00:16:19.699 "aliases": [ 00:16:19.699 "5075ab43-d632-4218-b973-a7c8a771baa9" 00:16:19.699 ], 00:16:19.699 "product_name": "Malloc disk", 00:16:19.699 "block_size": 512, 00:16:19.699 "num_blocks": 65536, 00:16:19.699 "uuid": "5075ab43-d632-4218-b973-a7c8a771baa9", 00:16:19.699 "assigned_rate_limits": { 00:16:19.699 "rw_ios_per_sec": 0, 00:16:19.699 
"rw_mbytes_per_sec": 0, 00:16:19.699 "r_mbytes_per_sec": 0, 00:16:19.699 "w_mbytes_per_sec": 0 00:16:19.699 }, 00:16:19.699 "claimed": true, 00:16:19.699 "claim_type": "exclusive_write", 00:16:19.699 "zoned": false, 00:16:19.699 "supported_io_types": { 00:16:19.699 "read": true, 00:16:19.699 "write": true, 00:16:19.699 "unmap": true, 00:16:19.699 "flush": true, 00:16:19.699 "reset": true, 00:16:19.699 "nvme_admin": false, 00:16:19.699 "nvme_io": false, 00:16:19.699 "nvme_io_md": false, 00:16:19.699 "write_zeroes": true, 00:16:19.699 "zcopy": true, 00:16:19.699 "get_zone_info": false, 00:16:19.699 "zone_management": false, 00:16:19.699 "zone_append": false, 00:16:19.699 "compare": false, 00:16:19.699 "compare_and_write": false, 00:16:19.699 "abort": true, 00:16:19.699 "seek_hole": false, 00:16:19.699 "seek_data": false, 00:16:19.699 "copy": true, 00:16:19.699 "nvme_iov_md": false 00:16:19.699 }, 00:16:19.699 "memory_domains": [ 00:16:19.699 { 00:16:19.699 "dma_device_id": "system", 00:16:19.699 "dma_device_type": 1 00:16:19.699 }, 00:16:19.699 { 00:16:19.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.699 "dma_device_type": 2 00:16:19.699 } 00:16:19.699 ], 00:16:19.699 "driver_specific": {} 00:16:19.699 } 00:16:19.699 ] 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.699 11:28:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.699 "name": "Existed_Raid", 00:16:19.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.699 "strip_size_kb": 64, 00:16:19.699 "state": "configuring", 00:16:19.699 "raid_level": "raid5f", 00:16:19.699 "superblock": false, 00:16:19.699 "num_base_bdevs": 3, 00:16:19.699 "num_base_bdevs_discovered": 1, 00:16:19.699 "num_base_bdevs_operational": 3, 00:16:19.699 "base_bdevs_list": [ 00:16:19.699 { 00:16:19.699 "name": "BaseBdev1", 00:16:19.699 "uuid": "5075ab43-d632-4218-b973-a7c8a771baa9", 00:16:19.699 "is_configured": true, 00:16:19.699 "data_offset": 0, 00:16:19.699 "data_size": 65536 00:16:19.699 }, 00:16:19.699 { 00:16:19.699 "name": 
"BaseBdev2", 00:16:19.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.699 "is_configured": false, 00:16:19.699 "data_offset": 0, 00:16:19.699 "data_size": 0 00:16:19.699 }, 00:16:19.699 { 00:16:19.699 "name": "BaseBdev3", 00:16:19.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.699 "is_configured": false, 00:16:19.699 "data_offset": 0, 00:16:19.699 "data_size": 0 00:16:19.699 } 00:16:19.699 ] 00:16:19.699 }' 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.699 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.265 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.266 [2024-11-15 11:28:02.952126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:20.266 [2024-11-15 11:28:02.952241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.266 [2024-11-15 11:28:02.964156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.266 [2024-11-15 11:28:02.966904] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:20.266 [2024-11-15 11:28:02.966969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.266 [2024-11-15 11:28:02.966985] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:20.266 [2024-11-15 11:28:02.967001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.266 11:28:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.266 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.266 "name": "Existed_Raid", 00:16:20.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.266 "strip_size_kb": 64, 00:16:20.266 "state": "configuring", 00:16:20.266 "raid_level": "raid5f", 00:16:20.266 "superblock": false, 00:16:20.266 "num_base_bdevs": 3, 00:16:20.266 "num_base_bdevs_discovered": 1, 00:16:20.266 "num_base_bdevs_operational": 3, 00:16:20.266 "base_bdevs_list": [ 00:16:20.266 { 00:16:20.266 "name": "BaseBdev1", 00:16:20.266 "uuid": "5075ab43-d632-4218-b973-a7c8a771baa9", 00:16:20.266 "is_configured": true, 00:16:20.266 "data_offset": 0, 00:16:20.266 "data_size": 65536 00:16:20.266 }, 00:16:20.266 { 00:16:20.266 "name": "BaseBdev2", 00:16:20.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.266 "is_configured": false, 00:16:20.266 "data_offset": 0, 00:16:20.266 "data_size": 0 00:16:20.266 }, 00:16:20.266 { 00:16:20.266 "name": "BaseBdev3", 00:16:20.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.266 "is_configured": false, 00:16:20.266 "data_offset": 0, 00:16:20.266 "data_size": 0 00:16:20.266 } 00:16:20.266 ] 00:16:20.266 }' 00:16:20.266 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.266 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.874 [2024-11-15 11:28:03.533821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.874 BaseBdev2 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.874 11:28:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.874 [ 00:16:20.874 { 00:16:20.874 "name": "BaseBdev2", 00:16:20.874 "aliases": [ 00:16:20.875 "6a9dd155-799a-4466-8e89-3cefb07cf060" 00:16:20.875 ], 00:16:20.875 "product_name": "Malloc disk", 00:16:20.875 "block_size": 512, 00:16:20.875 "num_blocks": 65536, 00:16:20.875 "uuid": "6a9dd155-799a-4466-8e89-3cefb07cf060", 00:16:20.875 "assigned_rate_limits": { 00:16:20.875 "rw_ios_per_sec": 0, 00:16:20.875 "rw_mbytes_per_sec": 0, 00:16:20.875 "r_mbytes_per_sec": 0, 00:16:20.875 "w_mbytes_per_sec": 0 00:16:20.875 }, 00:16:20.875 "claimed": true, 00:16:20.875 "claim_type": "exclusive_write", 00:16:20.875 "zoned": false, 00:16:20.875 "supported_io_types": { 00:16:20.875 "read": true, 00:16:20.875 "write": true, 00:16:20.875 "unmap": true, 00:16:20.875 "flush": true, 00:16:20.875 "reset": true, 00:16:20.875 "nvme_admin": false, 00:16:20.875 "nvme_io": false, 00:16:20.875 "nvme_io_md": false, 00:16:20.875 "write_zeroes": true, 00:16:20.875 "zcopy": true, 00:16:20.875 "get_zone_info": false, 00:16:20.875 "zone_management": false, 00:16:20.875 "zone_append": false, 00:16:20.875 "compare": false, 00:16:20.875 "compare_and_write": false, 00:16:20.875 "abort": true, 00:16:20.875 "seek_hole": false, 00:16:20.875 "seek_data": false, 00:16:20.875 "copy": true, 00:16:20.875 "nvme_iov_md": false 00:16:20.875 }, 00:16:20.875 "memory_domains": [ 00:16:20.875 { 00:16:20.875 "dma_device_id": "system", 00:16:20.875 "dma_device_type": 1 00:16:20.875 }, 00:16:20.875 { 00:16:20.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.875 "dma_device_type": 2 00:16:20.875 } 00:16:20.875 ], 00:16:20.875 "driver_specific": {} 00:16:20.875 } 00:16:20.875 ] 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:20.875 "name": "Existed_Raid", 00:16:20.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.875 "strip_size_kb": 64, 00:16:20.875 "state": "configuring", 00:16:20.875 "raid_level": "raid5f", 00:16:20.875 "superblock": false, 00:16:20.875 "num_base_bdevs": 3, 00:16:20.875 "num_base_bdevs_discovered": 2, 00:16:20.875 "num_base_bdevs_operational": 3, 00:16:20.875 "base_bdevs_list": [ 00:16:20.875 { 00:16:20.875 "name": "BaseBdev1", 00:16:20.875 "uuid": "5075ab43-d632-4218-b973-a7c8a771baa9", 00:16:20.875 "is_configured": true, 00:16:20.875 "data_offset": 0, 00:16:20.875 "data_size": 65536 00:16:20.875 }, 00:16:20.875 { 00:16:20.875 "name": "BaseBdev2", 00:16:20.875 "uuid": "6a9dd155-799a-4466-8e89-3cefb07cf060", 00:16:20.875 "is_configured": true, 00:16:20.875 "data_offset": 0, 00:16:20.875 "data_size": 65536 00:16:20.875 }, 00:16:20.875 { 00:16:20.875 "name": "BaseBdev3", 00:16:20.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.875 "is_configured": false, 00:16:20.875 "data_offset": 0, 00:16:20.875 "data_size": 0 00:16:20.875 } 00:16:20.875 ] 00:16:20.875 }' 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.875 11:28:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.444 [2024-11-15 11:28:04.134534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.444 [2024-11-15 11:28:04.134726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:21.444 [2024-11-15 11:28:04.134752] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:21.444 [2024-11-15 11:28:04.135151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:21.444 [2024-11-15 11:28:04.140407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:21.444 [2024-11-15 11:28:04.140437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:21.444 [2024-11-15 11:28:04.140906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.444 BaseBdev3 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.444 [ 00:16:21.444 { 00:16:21.444 "name": "BaseBdev3", 00:16:21.444 "aliases": [ 00:16:21.444 "a0cdcf05-4aeb-43a1-a0b7-35f60f1d1365" 00:16:21.444 ], 00:16:21.444 "product_name": "Malloc disk", 00:16:21.444 "block_size": 512, 00:16:21.444 "num_blocks": 65536, 00:16:21.444 "uuid": "a0cdcf05-4aeb-43a1-a0b7-35f60f1d1365", 00:16:21.444 "assigned_rate_limits": { 00:16:21.444 "rw_ios_per_sec": 0, 00:16:21.444 "rw_mbytes_per_sec": 0, 00:16:21.444 "r_mbytes_per_sec": 0, 00:16:21.444 "w_mbytes_per_sec": 0 00:16:21.444 }, 00:16:21.444 "claimed": true, 00:16:21.444 "claim_type": "exclusive_write", 00:16:21.444 "zoned": false, 00:16:21.444 "supported_io_types": { 00:16:21.444 "read": true, 00:16:21.444 "write": true, 00:16:21.444 "unmap": true, 00:16:21.444 "flush": true, 00:16:21.444 "reset": true, 00:16:21.444 "nvme_admin": false, 00:16:21.444 "nvme_io": false, 00:16:21.444 "nvme_io_md": false, 00:16:21.444 "write_zeroes": true, 00:16:21.444 "zcopy": true, 00:16:21.444 "get_zone_info": false, 00:16:21.444 "zone_management": false, 00:16:21.444 "zone_append": false, 00:16:21.444 "compare": false, 00:16:21.444 "compare_and_write": false, 00:16:21.444 "abort": true, 00:16:21.444 "seek_hole": false, 00:16:21.444 "seek_data": false, 00:16:21.444 "copy": true, 00:16:21.444 "nvme_iov_md": false 00:16:21.444 }, 00:16:21.444 "memory_domains": [ 00:16:21.444 { 00:16:21.444 "dma_device_id": "system", 00:16:21.444 "dma_device_type": 1 00:16:21.444 }, 00:16:21.444 { 00:16:21.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.444 "dma_device_type": 2 00:16:21.444 } 00:16:21.444 ], 00:16:21.444 "driver_specific": {} 00:16:21.444 } 00:16:21.444 ] 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.444 11:28:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.444 "name": "Existed_Raid", 00:16:21.444 "uuid": "f897f098-925b-40cc-a7c3-b875002ff233", 00:16:21.444 "strip_size_kb": 64, 00:16:21.444 "state": "online", 00:16:21.444 "raid_level": "raid5f", 00:16:21.444 "superblock": false, 00:16:21.444 "num_base_bdevs": 3, 00:16:21.444 "num_base_bdevs_discovered": 3, 00:16:21.444 "num_base_bdevs_operational": 3, 00:16:21.444 "base_bdevs_list": [ 00:16:21.444 { 00:16:21.444 "name": "BaseBdev1", 00:16:21.444 "uuid": "5075ab43-d632-4218-b973-a7c8a771baa9", 00:16:21.444 "is_configured": true, 00:16:21.444 "data_offset": 0, 00:16:21.444 "data_size": 65536 00:16:21.444 }, 00:16:21.444 { 00:16:21.444 "name": "BaseBdev2", 00:16:21.444 "uuid": "6a9dd155-799a-4466-8e89-3cefb07cf060", 00:16:21.444 "is_configured": true, 00:16:21.444 "data_offset": 0, 00:16:21.444 "data_size": 65536 00:16:21.444 }, 00:16:21.444 { 00:16:21.444 "name": "BaseBdev3", 00:16:21.444 "uuid": "a0cdcf05-4aeb-43a1-a0b7-35f60f1d1365", 00:16:21.444 "is_configured": true, 00:16:21.444 "data_offset": 0, 00:16:21.444 "data_size": 65536 00:16:21.444 } 00:16:21.444 ] 00:16:21.444 }' 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.444 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.012 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:22.012 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:22.012 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:22.012 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:22.012 11:28:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:22.012 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:22.012 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:22.012 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:22.012 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.012 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.012 [2024-11-15 11:28:04.715891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.012 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.012 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:22.012 "name": "Existed_Raid", 00:16:22.012 "aliases": [ 00:16:22.012 "f897f098-925b-40cc-a7c3-b875002ff233" 00:16:22.012 ], 00:16:22.012 "product_name": "Raid Volume", 00:16:22.012 "block_size": 512, 00:16:22.012 "num_blocks": 131072, 00:16:22.012 "uuid": "f897f098-925b-40cc-a7c3-b875002ff233", 00:16:22.012 "assigned_rate_limits": { 00:16:22.012 "rw_ios_per_sec": 0, 00:16:22.012 "rw_mbytes_per_sec": 0, 00:16:22.012 "r_mbytes_per_sec": 0, 00:16:22.012 "w_mbytes_per_sec": 0 00:16:22.012 }, 00:16:22.012 "claimed": false, 00:16:22.012 "zoned": false, 00:16:22.012 "supported_io_types": { 00:16:22.012 "read": true, 00:16:22.012 "write": true, 00:16:22.012 "unmap": false, 00:16:22.012 "flush": false, 00:16:22.012 "reset": true, 00:16:22.012 "nvme_admin": false, 00:16:22.012 "nvme_io": false, 00:16:22.012 "nvme_io_md": false, 00:16:22.012 "write_zeroes": true, 00:16:22.012 "zcopy": false, 00:16:22.012 "get_zone_info": false, 00:16:22.012 "zone_management": false, 00:16:22.012 "zone_append": false, 
00:16:22.012 "compare": false, 00:16:22.012 "compare_and_write": false, 00:16:22.012 "abort": false, 00:16:22.012 "seek_hole": false, 00:16:22.012 "seek_data": false, 00:16:22.012 "copy": false, 00:16:22.012 "nvme_iov_md": false 00:16:22.012 }, 00:16:22.012 "driver_specific": { 00:16:22.012 "raid": { 00:16:22.012 "uuid": "f897f098-925b-40cc-a7c3-b875002ff233", 00:16:22.012 "strip_size_kb": 64, 00:16:22.012 "state": "online", 00:16:22.012 "raid_level": "raid5f", 00:16:22.012 "superblock": false, 00:16:22.012 "num_base_bdevs": 3, 00:16:22.012 "num_base_bdevs_discovered": 3, 00:16:22.012 "num_base_bdevs_operational": 3, 00:16:22.012 "base_bdevs_list": [ 00:16:22.012 { 00:16:22.012 "name": "BaseBdev1", 00:16:22.012 "uuid": "5075ab43-d632-4218-b973-a7c8a771baa9", 00:16:22.012 "is_configured": true, 00:16:22.012 "data_offset": 0, 00:16:22.012 "data_size": 65536 00:16:22.012 }, 00:16:22.012 { 00:16:22.012 "name": "BaseBdev2", 00:16:22.012 "uuid": "6a9dd155-799a-4466-8e89-3cefb07cf060", 00:16:22.012 "is_configured": true, 00:16:22.012 "data_offset": 0, 00:16:22.012 "data_size": 65536 00:16:22.012 }, 00:16:22.012 { 00:16:22.012 "name": "BaseBdev3", 00:16:22.012 "uuid": "a0cdcf05-4aeb-43a1-a0b7-35f60f1d1365", 00:16:22.012 "is_configured": true, 00:16:22.012 "data_offset": 0, 00:16:22.012 "data_size": 65536 00:16:22.012 } 00:16:22.012 ] 00:16:22.012 } 00:16:22.013 } 00:16:22.013 }' 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:22.013 BaseBdev2 00:16:22.013 BaseBdev3' 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.013 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.274 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:22.274 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:22.274 11:28:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.274 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:22.274 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.274 11:28:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.274 11:28:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.274 [2024-11-15 11:28:05.047702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:22.274 
11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.274 "name": "Existed_Raid", 00:16:22.274 "uuid": "f897f098-925b-40cc-a7c3-b875002ff233", 00:16:22.274 "strip_size_kb": 64, 00:16:22.274 "state": 
"online", 00:16:22.274 "raid_level": "raid5f", 00:16:22.274 "superblock": false, 00:16:22.274 "num_base_bdevs": 3, 00:16:22.274 "num_base_bdevs_discovered": 2, 00:16:22.274 "num_base_bdevs_operational": 2, 00:16:22.274 "base_bdevs_list": [ 00:16:22.274 { 00:16:22.274 "name": null, 00:16:22.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.274 "is_configured": false, 00:16:22.274 "data_offset": 0, 00:16:22.274 "data_size": 65536 00:16:22.274 }, 00:16:22.274 { 00:16:22.274 "name": "BaseBdev2", 00:16:22.274 "uuid": "6a9dd155-799a-4466-8e89-3cefb07cf060", 00:16:22.274 "is_configured": true, 00:16:22.274 "data_offset": 0, 00:16:22.274 "data_size": 65536 00:16:22.274 }, 00:16:22.274 { 00:16:22.274 "name": "BaseBdev3", 00:16:22.274 "uuid": "a0cdcf05-4aeb-43a1-a0b7-35f60f1d1365", 00:16:22.274 "is_configured": true, 00:16:22.274 "data_offset": 0, 00:16:22.274 "data_size": 65536 00:16:22.274 } 00:16:22.274 ] 00:16:22.274 }' 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.274 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.841 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:22.841 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.841 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.841 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:22.841 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.841 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.841 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.841 11:28:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:22.841 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.841 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:22.841 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.841 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.841 [2024-11-15 11:28:05.716428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:22.841 [2024-11-15 11:28:05.716641] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.100 [2024-11-15 11:28:05.805284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.100 [2024-11-15 11:28:05.873385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:23.100 [2024-11-15 11:28:05.873480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:23.100 11:28:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.100 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:23.100 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:23.100 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:23.100 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:23.100 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:23.100 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:23.100 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.100 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.359 BaseBdev2 00:16:23.359 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.359 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:23.359 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:23.359 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:23.359 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:23.359 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:23.359 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:23.359 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:23.359 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
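Here `waitforbdev` creates each malloc base bdev (`bdev_malloc_create 32 512 -b BaseBdev2`) and then polls its descriptor with `bdev_get_bdevs -b BaseBdev2 -t 2000`. A small Python sketch (illustrative only, not from the test suite) parsing a descriptor of the shape dumped just below, and checking that the geometry matches the create call:

```python
import json

# Descriptor shape returned by `rpc.py bdev_get_bdevs -b BaseBdev2`,
# reduced to a few fields reproduced from the trace below.
descriptor = json.loads("""
[{
  "name": "BaseBdev2",
  "aliases": ["1b55f1b6-1858-4bfe-9231-c2866e85af8f"],
  "product_name": "Malloc disk",
  "block_size": 512,
  "num_blocks": 65536,
  "claimed": false,
  "supported_io_types": {"read": true, "write": true, "unmap": true,
                         "flush": true, "reset": true, "abort": true}
}]
""")

bdev = descriptor[0]
# bdev_malloc_create 32 512 -> 32 MiB of 512-byte blocks, i.e. 65536 blocks
size_mib = bdev["block_size"] * bdev["num_blocks"] // (1024 * 1024)
print(bdev["product_name"], size_mib)
```

`claimed` flips to `true` (with `"claim_type": "exclusive_write"`) once the raid bdev claims the base bdev, which is what the later BaseBdev1 descriptor in this trace shows.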
00:16:23.360 [ 00:16:23.360 { 00:16:23.360 "name": "BaseBdev2", 00:16:23.360 "aliases": [ 00:16:23.360 "1b55f1b6-1858-4bfe-9231-c2866e85af8f" 00:16:23.360 ], 00:16:23.360 "product_name": "Malloc disk", 00:16:23.360 "block_size": 512, 00:16:23.360 "num_blocks": 65536, 00:16:23.360 "uuid": "1b55f1b6-1858-4bfe-9231-c2866e85af8f", 00:16:23.360 "assigned_rate_limits": { 00:16:23.360 "rw_ios_per_sec": 0, 00:16:23.360 "rw_mbytes_per_sec": 0, 00:16:23.360 "r_mbytes_per_sec": 0, 00:16:23.360 "w_mbytes_per_sec": 0 00:16:23.360 }, 00:16:23.360 "claimed": false, 00:16:23.360 "zoned": false, 00:16:23.360 "supported_io_types": { 00:16:23.360 "read": true, 00:16:23.360 "write": true, 00:16:23.360 "unmap": true, 00:16:23.360 "flush": true, 00:16:23.360 "reset": true, 00:16:23.360 "nvme_admin": false, 00:16:23.360 "nvme_io": false, 00:16:23.360 "nvme_io_md": false, 00:16:23.360 "write_zeroes": true, 00:16:23.360 "zcopy": true, 00:16:23.360 "get_zone_info": false, 00:16:23.360 "zone_management": false, 00:16:23.360 "zone_append": false, 00:16:23.360 "compare": false, 00:16:23.360 "compare_and_write": false, 00:16:23.360 "abort": true, 00:16:23.360 "seek_hole": false, 00:16:23.360 "seek_data": false, 00:16:23.360 "copy": true, 00:16:23.360 "nvme_iov_md": false 00:16:23.360 }, 00:16:23.360 "memory_domains": [ 00:16:23.360 { 00:16:23.360 "dma_device_id": "system", 00:16:23.360 "dma_device_type": 1 00:16:23.360 }, 00:16:23.360 { 00:16:23.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.360 "dma_device_type": 2 00:16:23.360 } 00:16:23.360 ], 00:16:23.360 "driver_specific": {} 00:16:23.360 } 00:16:23.360 ] 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.360 BaseBdev3 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.360 [ 00:16:23.360 { 00:16:23.360 "name": "BaseBdev3", 00:16:23.360 "aliases": [ 00:16:23.360 "a67eee91-a3e5-45a1-8dc4-b2cc1d1040eb" 00:16:23.360 ], 00:16:23.360 "product_name": "Malloc disk", 00:16:23.360 "block_size": 512, 00:16:23.360 "num_blocks": 65536, 00:16:23.360 "uuid": "a67eee91-a3e5-45a1-8dc4-b2cc1d1040eb", 00:16:23.360 "assigned_rate_limits": { 00:16:23.360 "rw_ios_per_sec": 0, 00:16:23.360 "rw_mbytes_per_sec": 0, 00:16:23.360 "r_mbytes_per_sec": 0, 00:16:23.360 "w_mbytes_per_sec": 0 00:16:23.360 }, 00:16:23.360 "claimed": false, 00:16:23.360 "zoned": false, 00:16:23.360 "supported_io_types": { 00:16:23.360 "read": true, 00:16:23.360 "write": true, 00:16:23.360 "unmap": true, 00:16:23.360 "flush": true, 00:16:23.360 "reset": true, 00:16:23.360 "nvme_admin": false, 00:16:23.360 "nvme_io": false, 00:16:23.360 "nvme_io_md": false, 00:16:23.360 "write_zeroes": true, 00:16:23.360 "zcopy": true, 00:16:23.360 "get_zone_info": false, 00:16:23.360 "zone_management": false, 00:16:23.360 "zone_append": false, 00:16:23.360 "compare": false, 00:16:23.360 "compare_and_write": false, 00:16:23.360 "abort": true, 00:16:23.360 "seek_hole": false, 00:16:23.360 "seek_data": false, 00:16:23.360 "copy": true, 00:16:23.360 "nvme_iov_md": false 00:16:23.360 }, 00:16:23.360 "memory_domains": [ 00:16:23.360 { 00:16:23.360 "dma_device_id": "system", 00:16:23.360 "dma_device_type": 1 00:16:23.360 }, 00:16:23.360 { 00:16:23.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.360 "dma_device_type": 2 00:16:23.360 } 00:16:23.360 ], 00:16:23.360 "driver_specific": {} 00:16:23.360 } 00:16:23.360 ] 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:23.360 11:28:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.360 [2024-11-15 11:28:06.180941] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.360 [2024-11-15 11:28:06.181007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.360 [2024-11-15 11:28:06.181039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.360 [2024-11-15 11:28:06.183769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.360 11:28:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.360 "name": "Existed_Raid", 00:16:23.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.360 "strip_size_kb": 64, 00:16:23.360 "state": "configuring", 00:16:23.360 "raid_level": "raid5f", 00:16:23.360 "superblock": false, 00:16:23.360 "num_base_bdevs": 3, 00:16:23.360 "num_base_bdevs_discovered": 2, 00:16:23.360 "num_base_bdevs_operational": 3, 00:16:23.360 "base_bdevs_list": [ 00:16:23.360 { 00:16:23.360 "name": "BaseBdev1", 00:16:23.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.360 "is_configured": false, 00:16:23.360 "data_offset": 0, 00:16:23.360 "data_size": 0 00:16:23.360 }, 00:16:23.360 { 00:16:23.360 "name": "BaseBdev2", 00:16:23.360 "uuid": "1b55f1b6-1858-4bfe-9231-c2866e85af8f", 00:16:23.360 "is_configured": true, 00:16:23.360 "data_offset": 0, 00:16:23.360 "data_size": 65536 00:16:23.360 }, 00:16:23.360 { 00:16:23.360 "name": "BaseBdev3", 00:16:23.360 "uuid": "a67eee91-a3e5-45a1-8dc4-b2cc1d1040eb", 00:16:23.360 "is_configured": true, 
00:16:23.360 "data_offset": 0, 00:16:23.360 "data_size": 65536 00:16:23.360 } 00:16:23.360 ] 00:16:23.360 }' 00:16:23.360 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.361 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.928 [2024-11-15 11:28:06.717224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.928 11:28:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.928 "name": "Existed_Raid", 00:16:23.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.928 "strip_size_kb": 64, 00:16:23.928 "state": "configuring", 00:16:23.928 "raid_level": "raid5f", 00:16:23.928 "superblock": false, 00:16:23.928 "num_base_bdevs": 3, 00:16:23.928 "num_base_bdevs_discovered": 1, 00:16:23.928 "num_base_bdevs_operational": 3, 00:16:23.928 "base_bdevs_list": [ 00:16:23.928 { 00:16:23.928 "name": "BaseBdev1", 00:16:23.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.928 "is_configured": false, 00:16:23.928 "data_offset": 0, 00:16:23.928 "data_size": 0 00:16:23.928 }, 00:16:23.928 { 00:16:23.928 "name": null, 00:16:23.928 "uuid": "1b55f1b6-1858-4bfe-9231-c2866e85af8f", 00:16:23.928 "is_configured": false, 00:16:23.928 "data_offset": 0, 00:16:23.928 "data_size": 65536 00:16:23.928 }, 00:16:23.928 { 00:16:23.928 "name": "BaseBdev3", 00:16:23.928 "uuid": "a67eee91-a3e5-45a1-8dc4-b2cc1d1040eb", 00:16:23.928 "is_configured": true, 00:16:23.928 "data_offset": 0, 00:16:23.928 "data_size": 65536 00:16:23.928 } 00:16:23.928 ] 00:16:23.928 }' 00:16:23.928 11:28:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.928 11:28:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.497 [2024-11-15 11:28:07.340945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.497 BaseBdev1 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:24.497 11:28:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.497 [ 00:16:24.497 { 00:16:24.497 "name": "BaseBdev1", 00:16:24.497 "aliases": [ 00:16:24.497 "9e61fee7-89e7-4cb4-8461-612c589aa6dd" 00:16:24.497 ], 00:16:24.497 "product_name": "Malloc disk", 00:16:24.497 "block_size": 512, 00:16:24.497 "num_blocks": 65536, 00:16:24.497 "uuid": "9e61fee7-89e7-4cb4-8461-612c589aa6dd", 00:16:24.497 "assigned_rate_limits": { 00:16:24.497 "rw_ios_per_sec": 0, 00:16:24.497 "rw_mbytes_per_sec": 0, 00:16:24.497 "r_mbytes_per_sec": 0, 00:16:24.497 "w_mbytes_per_sec": 0 00:16:24.497 }, 00:16:24.497 "claimed": true, 00:16:24.497 "claim_type": "exclusive_write", 00:16:24.497 "zoned": false, 00:16:24.497 "supported_io_types": { 00:16:24.497 "read": true, 00:16:24.497 "write": true, 00:16:24.497 "unmap": true, 00:16:24.497 "flush": true, 00:16:24.497 "reset": true, 00:16:24.497 "nvme_admin": false, 00:16:24.497 "nvme_io": false, 00:16:24.497 "nvme_io_md": false, 00:16:24.497 "write_zeroes": true, 00:16:24.497 "zcopy": true, 00:16:24.497 "get_zone_info": false, 00:16:24.497 "zone_management": false, 00:16:24.497 "zone_append": false, 00:16:24.497 
"compare": false, 00:16:24.497 "compare_and_write": false, 00:16:24.497 "abort": true, 00:16:24.497 "seek_hole": false, 00:16:24.497 "seek_data": false, 00:16:24.497 "copy": true, 00:16:24.497 "nvme_iov_md": false 00:16:24.497 }, 00:16:24.497 "memory_domains": [ 00:16:24.497 { 00:16:24.497 "dma_device_id": "system", 00:16:24.497 "dma_device_type": 1 00:16:24.497 }, 00:16:24.497 { 00:16:24.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.497 "dma_device_type": 2 00:16:24.497 } 00:16:24.497 ], 00:16:24.497 "driver_specific": {} 00:16:24.497 } 00:16:24.497 ] 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.497 11:28:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.497 "name": "Existed_Raid", 00:16:24.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.497 "strip_size_kb": 64, 00:16:24.497 "state": "configuring", 00:16:24.497 "raid_level": "raid5f", 00:16:24.497 "superblock": false, 00:16:24.497 "num_base_bdevs": 3, 00:16:24.497 "num_base_bdevs_discovered": 2, 00:16:24.497 "num_base_bdevs_operational": 3, 00:16:24.497 "base_bdevs_list": [ 00:16:24.497 { 00:16:24.497 "name": "BaseBdev1", 00:16:24.497 "uuid": "9e61fee7-89e7-4cb4-8461-612c589aa6dd", 00:16:24.497 "is_configured": true, 00:16:24.497 "data_offset": 0, 00:16:24.497 "data_size": 65536 00:16:24.497 }, 00:16:24.497 { 00:16:24.497 "name": null, 00:16:24.497 "uuid": "1b55f1b6-1858-4bfe-9231-c2866e85af8f", 00:16:24.497 "is_configured": false, 00:16:24.497 "data_offset": 0, 00:16:24.497 "data_size": 65536 00:16:24.497 }, 00:16:24.497 { 00:16:24.497 "name": "BaseBdev3", 00:16:24.497 "uuid": "a67eee91-a3e5-45a1-8dc4-b2cc1d1040eb", 00:16:24.497 "is_configured": true, 00:16:24.497 "data_offset": 0, 00:16:24.497 "data_size": 65536 00:16:24.497 } 00:16:24.497 ] 00:16:24.497 }' 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.497 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.066 11:28:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.066 [2024-11-15 11:28:07.937136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.066 11:28:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.066 "name": "Existed_Raid", 00:16:25.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.066 "strip_size_kb": 64, 00:16:25.066 "state": "configuring", 00:16:25.066 "raid_level": "raid5f", 00:16:25.066 "superblock": false, 00:16:25.066 "num_base_bdevs": 3, 00:16:25.066 "num_base_bdevs_discovered": 1, 00:16:25.066 "num_base_bdevs_operational": 3, 00:16:25.066 "base_bdevs_list": [ 00:16:25.066 { 00:16:25.066 "name": "BaseBdev1", 00:16:25.066 "uuid": "9e61fee7-89e7-4cb4-8461-612c589aa6dd", 00:16:25.066 "is_configured": true, 00:16:25.066 "data_offset": 0, 00:16:25.066 "data_size": 65536 00:16:25.066 }, 00:16:25.066 { 00:16:25.066 "name": null, 00:16:25.066 "uuid": "1b55f1b6-1858-4bfe-9231-c2866e85af8f", 00:16:25.066 "is_configured": false, 00:16:25.066 "data_offset": 0, 00:16:25.066 "data_size": 65536 00:16:25.066 }, 00:16:25.066 { 00:16:25.066 "name": null, 
00:16:25.066 "uuid": "a67eee91-a3e5-45a1-8dc4-b2cc1d1040eb", 00:16:25.066 "is_configured": false, 00:16:25.066 "data_offset": 0, 00:16:25.066 "data_size": 65536 00:16:25.066 } 00:16:25.066 ] 00:16:25.066 }' 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.066 11:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.633 [2024-11-15 11:28:08.541389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.633 11:28:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.633 11:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.892 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.892 "name": "Existed_Raid", 00:16:25.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.892 "strip_size_kb": 64, 00:16:25.892 "state": "configuring", 00:16:25.892 "raid_level": "raid5f", 00:16:25.892 "superblock": false, 00:16:25.892 "num_base_bdevs": 3, 00:16:25.892 "num_base_bdevs_discovered": 2, 00:16:25.892 "num_base_bdevs_operational": 3, 00:16:25.892 "base_bdevs_list": [ 00:16:25.892 { 
00:16:25.892 "name": "BaseBdev1", 00:16:25.892 "uuid": "9e61fee7-89e7-4cb4-8461-612c589aa6dd", 00:16:25.892 "is_configured": true, 00:16:25.892 "data_offset": 0, 00:16:25.892 "data_size": 65536 00:16:25.892 }, 00:16:25.892 { 00:16:25.892 "name": null, 00:16:25.892 "uuid": "1b55f1b6-1858-4bfe-9231-c2866e85af8f", 00:16:25.892 "is_configured": false, 00:16:25.892 "data_offset": 0, 00:16:25.892 "data_size": 65536 00:16:25.892 }, 00:16:25.892 { 00:16:25.892 "name": "BaseBdev3", 00:16:25.892 "uuid": "a67eee91-a3e5-45a1-8dc4-b2cc1d1040eb", 00:16:25.892 "is_configured": true, 00:16:25.892 "data_offset": 0, 00:16:25.892 "data_size": 65536 00:16:25.892 } 00:16:25.892 ] 00:16:25.892 }' 00:16:25.892 11:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.892 11:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.150 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.150 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.150 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.150 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:26.150 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.409 [2024-11-15 11:28:09.113562] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.409 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.410 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.410 11:28:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.410 "name": "Existed_Raid", 00:16:26.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.410 "strip_size_kb": 64, 00:16:26.410 "state": "configuring", 00:16:26.410 "raid_level": "raid5f", 00:16:26.410 "superblock": false, 00:16:26.410 "num_base_bdevs": 3, 00:16:26.410 "num_base_bdevs_discovered": 1, 00:16:26.410 "num_base_bdevs_operational": 3, 00:16:26.410 "base_bdevs_list": [ 00:16:26.410 { 00:16:26.410 "name": null, 00:16:26.410 "uuid": "9e61fee7-89e7-4cb4-8461-612c589aa6dd", 00:16:26.410 "is_configured": false, 00:16:26.410 "data_offset": 0, 00:16:26.410 "data_size": 65536 00:16:26.410 }, 00:16:26.410 { 00:16:26.410 "name": null, 00:16:26.410 "uuid": "1b55f1b6-1858-4bfe-9231-c2866e85af8f", 00:16:26.410 "is_configured": false, 00:16:26.410 "data_offset": 0, 00:16:26.410 "data_size": 65536 00:16:26.410 }, 00:16:26.410 { 00:16:26.410 "name": "BaseBdev3", 00:16:26.410 "uuid": "a67eee91-a3e5-45a1-8dc4-b2cc1d1040eb", 00:16:26.410 "is_configured": true, 00:16:26.410 "data_offset": 0, 00:16:26.410 "data_size": 65536 00:16:26.410 } 00:16:26.410 ] 00:16:26.410 }' 00:16:26.410 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.410 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.977 [2024-11-15 11:28:09.785972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.977 11:28:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.977 "name": "Existed_Raid", 00:16:26.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.977 "strip_size_kb": 64, 00:16:26.977 "state": "configuring", 00:16:26.977 "raid_level": "raid5f", 00:16:26.977 "superblock": false, 00:16:26.977 "num_base_bdevs": 3, 00:16:26.977 "num_base_bdevs_discovered": 2, 00:16:26.977 "num_base_bdevs_operational": 3, 00:16:26.977 "base_bdevs_list": [ 00:16:26.977 { 00:16:26.977 "name": null, 00:16:26.977 "uuid": "9e61fee7-89e7-4cb4-8461-612c589aa6dd", 00:16:26.977 "is_configured": false, 00:16:26.977 "data_offset": 0, 00:16:26.977 "data_size": 65536 00:16:26.977 }, 00:16:26.977 { 00:16:26.977 "name": "BaseBdev2", 00:16:26.977 "uuid": "1b55f1b6-1858-4bfe-9231-c2866e85af8f", 00:16:26.977 "is_configured": true, 00:16:26.977 "data_offset": 0, 00:16:26.977 "data_size": 65536 00:16:26.977 }, 00:16:26.977 { 00:16:26.977 "name": "BaseBdev3", 00:16:26.977 "uuid": "a67eee91-a3e5-45a1-8dc4-b2cc1d1040eb", 00:16:26.977 "is_configured": true, 00:16:26.977 "data_offset": 0, 00:16:26.977 "data_size": 65536 00:16:26.977 } 00:16:26.977 ] 00:16:26.977 }' 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.977 11:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.547 11:28:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9e61fee7-89e7-4cb4-8461-612c589aa6dd 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.547 [2024-11-15 11:28:10.480139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:27.547 [2024-11-15 11:28:10.480264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:27.547 [2024-11-15 11:28:10.480282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:27.547 [2024-11-15 11:28:10.480600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:27.547 [2024-11-15 11:28:10.484907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:27.547 [2024-11-15 11:28:10.484932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:27.547 [2024-11-15 11:28:10.485343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.547 NewBaseBdev 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.547 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.806 11:28:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.806 [ 00:16:27.806 { 00:16:27.806 "name": "NewBaseBdev", 00:16:27.806 "aliases": [ 00:16:27.806 "9e61fee7-89e7-4cb4-8461-612c589aa6dd" 00:16:27.806 ], 00:16:27.806 "product_name": "Malloc disk", 00:16:27.806 "block_size": 512, 00:16:27.806 "num_blocks": 65536, 00:16:27.806 "uuid": "9e61fee7-89e7-4cb4-8461-612c589aa6dd", 00:16:27.806 "assigned_rate_limits": { 00:16:27.806 "rw_ios_per_sec": 0, 00:16:27.806 "rw_mbytes_per_sec": 0, 00:16:27.806 "r_mbytes_per_sec": 0, 00:16:27.806 "w_mbytes_per_sec": 0 00:16:27.806 }, 00:16:27.806 "claimed": true, 00:16:27.806 "claim_type": "exclusive_write", 00:16:27.806 "zoned": false, 00:16:27.806 "supported_io_types": { 00:16:27.806 "read": true, 00:16:27.806 "write": true, 00:16:27.806 "unmap": true, 00:16:27.806 "flush": true, 00:16:27.806 "reset": true, 00:16:27.806 "nvme_admin": false, 00:16:27.806 "nvme_io": false, 00:16:27.806 "nvme_io_md": false, 00:16:27.806 "write_zeroes": true, 00:16:27.806 "zcopy": true, 00:16:27.806 "get_zone_info": false, 00:16:27.806 "zone_management": false, 00:16:27.806 "zone_append": false, 00:16:27.806 "compare": false, 00:16:27.806 "compare_and_write": false, 00:16:27.806 "abort": true, 00:16:27.806 "seek_hole": false, 00:16:27.806 "seek_data": false, 00:16:27.806 "copy": true, 00:16:27.806 "nvme_iov_md": false 00:16:27.806 }, 00:16:27.806 "memory_domains": [ 00:16:27.806 { 00:16:27.806 "dma_device_id": "system", 00:16:27.806 "dma_device_type": 1 00:16:27.806 }, 00:16:27.806 { 00:16:27.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.806 "dma_device_type": 2 00:16:27.806 } 00:16:27.806 ], 00:16:27.806 "driver_specific": {} 00:16:27.806 } 00:16:27.806 ] 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:27.806 11:28:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.806 "name": "Existed_Raid", 00:16:27.806 "uuid": "d0a86923-9850-4603-876e-a259c0734c81", 00:16:27.806 "strip_size_kb": 64, 00:16:27.806 "state": "online", 
00:16:27.806 "raid_level": "raid5f", 00:16:27.806 "superblock": false, 00:16:27.806 "num_base_bdevs": 3, 00:16:27.806 "num_base_bdevs_discovered": 3, 00:16:27.806 "num_base_bdevs_operational": 3, 00:16:27.806 "base_bdevs_list": [ 00:16:27.806 { 00:16:27.806 "name": "NewBaseBdev", 00:16:27.806 "uuid": "9e61fee7-89e7-4cb4-8461-612c589aa6dd", 00:16:27.806 "is_configured": true, 00:16:27.806 "data_offset": 0, 00:16:27.806 "data_size": 65536 00:16:27.806 }, 00:16:27.806 { 00:16:27.806 "name": "BaseBdev2", 00:16:27.806 "uuid": "1b55f1b6-1858-4bfe-9231-c2866e85af8f", 00:16:27.806 "is_configured": true, 00:16:27.806 "data_offset": 0, 00:16:27.806 "data_size": 65536 00:16:27.806 }, 00:16:27.806 { 00:16:27.806 "name": "BaseBdev3", 00:16:27.806 "uuid": "a67eee91-a3e5-45a1-8dc4-b2cc1d1040eb", 00:16:27.806 "is_configured": true, 00:16:27.806 "data_offset": 0, 00:16:27.806 "data_size": 65536 00:16:27.806 } 00:16:27.806 ] 00:16:27.806 }' 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.806 11:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.374 [2024-11-15 11:28:11.048109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:28.374 "name": "Existed_Raid", 00:16:28.374 "aliases": [ 00:16:28.374 "d0a86923-9850-4603-876e-a259c0734c81" 00:16:28.374 ], 00:16:28.374 "product_name": "Raid Volume", 00:16:28.374 "block_size": 512, 00:16:28.374 "num_blocks": 131072, 00:16:28.374 "uuid": "d0a86923-9850-4603-876e-a259c0734c81", 00:16:28.374 "assigned_rate_limits": { 00:16:28.374 "rw_ios_per_sec": 0, 00:16:28.374 "rw_mbytes_per_sec": 0, 00:16:28.374 "r_mbytes_per_sec": 0, 00:16:28.374 "w_mbytes_per_sec": 0 00:16:28.374 }, 00:16:28.374 "claimed": false, 00:16:28.374 "zoned": false, 00:16:28.374 "supported_io_types": { 00:16:28.374 "read": true, 00:16:28.374 "write": true, 00:16:28.374 "unmap": false, 00:16:28.374 "flush": false, 00:16:28.374 "reset": true, 00:16:28.374 "nvme_admin": false, 00:16:28.374 "nvme_io": false, 00:16:28.374 "nvme_io_md": false, 00:16:28.374 "write_zeroes": true, 00:16:28.374 "zcopy": false, 00:16:28.374 "get_zone_info": false, 00:16:28.374 "zone_management": false, 00:16:28.374 "zone_append": false, 00:16:28.374 "compare": false, 00:16:28.374 "compare_and_write": false, 00:16:28.374 "abort": false, 00:16:28.374 "seek_hole": false, 00:16:28.374 "seek_data": false, 00:16:28.374 "copy": false, 00:16:28.374 "nvme_iov_md": false 00:16:28.374 }, 00:16:28.374 "driver_specific": { 00:16:28.374 "raid": { 00:16:28.374 "uuid": "d0a86923-9850-4603-876e-a259c0734c81", 
00:16:28.374 "strip_size_kb": 64, 00:16:28.374 "state": "online", 00:16:28.374 "raid_level": "raid5f", 00:16:28.374 "superblock": false, 00:16:28.374 "num_base_bdevs": 3, 00:16:28.374 "num_base_bdevs_discovered": 3, 00:16:28.374 "num_base_bdevs_operational": 3, 00:16:28.374 "base_bdevs_list": [ 00:16:28.374 { 00:16:28.374 "name": "NewBaseBdev", 00:16:28.374 "uuid": "9e61fee7-89e7-4cb4-8461-612c589aa6dd", 00:16:28.374 "is_configured": true, 00:16:28.374 "data_offset": 0, 00:16:28.374 "data_size": 65536 00:16:28.374 }, 00:16:28.374 { 00:16:28.374 "name": "BaseBdev2", 00:16:28.374 "uuid": "1b55f1b6-1858-4bfe-9231-c2866e85af8f", 00:16:28.374 "is_configured": true, 00:16:28.374 "data_offset": 0, 00:16:28.374 "data_size": 65536 00:16:28.374 }, 00:16:28.374 { 00:16:28.374 "name": "BaseBdev3", 00:16:28.374 "uuid": "a67eee91-a3e5-45a1-8dc4-b2cc1d1040eb", 00:16:28.374 "is_configured": true, 00:16:28.374 "data_offset": 0, 00:16:28.374 "data_size": 65536 00:16:28.374 } 00:16:28.374 ] 00:16:28.374 } 00:16:28.374 } 00:16:28.374 }' 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:28.374 BaseBdev2 00:16:28.374 BaseBdev3' 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.374 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.375 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.634 [2024-11-15 11:28:11.371940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.634 [2024-11-15 11:28:11.371992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.634 [2024-11-15 11:28:11.372098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.634 [2024-11-15 11:28:11.372536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.634 [2024-11-15 11:28:11.372586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80099 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80099 ']' 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 
80099 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80099 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:28.634 killing process with pid 80099 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80099' 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80099 00:16:28.634 [2024-11-15 11:28:11.414274] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.634 11:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80099 00:16:28.897 [2024-11-15 11:28:11.678184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.843 11:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:29.843 00:16:29.843 real 0m12.002s 00:16:29.843 user 0m19.844s 00:16:29.843 sys 0m1.826s 00:16:29.843 11:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:29.843 11:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.843 ************************************ 00:16:29.843 END TEST raid5f_state_function_test 00:16:29.843 ************************************ 00:16:29.843 11:28:12 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:29.843 11:28:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 
00:16:29.843 11:28:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:29.843 11:28:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.843 ************************************ 00:16:29.843 START TEST raid5f_state_function_test_sb 00:16:29.843 ************************************ 00:16:29.843 11:28:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:16:29.843 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:29.843 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:29.843 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:29.843 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80733 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:30.102 Process raid pid: 80733 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80733' 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80733 00:16:30.102 11:28:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80733 ']' 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:30.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:30.102 11:28:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.102 [2024-11-15 11:28:12.911143] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:16:30.102 [2024-11-15 11:28:12.911354] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.361 [2024-11-15 11:28:13.101035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.361 [2024-11-15 11:28:13.242930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.619 [2024-11-15 11:28:13.455249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.619 [2024-11-15 11:28:13.455329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:31.187 11:28:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.187 [2024-11-15 11:28:13.930978] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.187 [2024-11-15 11:28:13.931059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.187 [2024-11-15 11:28:13.931076] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.187 [2024-11-15 11:28:13.931092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.187 [2024-11-15 11:28:13.931103] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.187 [2024-11-15 11:28:13.931125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.187 "name": "Existed_Raid", 00:16:31.187 "uuid": "aec77ce2-723d-4168-9bfc-48b02067a9c9", 00:16:31.187 "strip_size_kb": 64, 00:16:31.187 "state": "configuring", 00:16:31.187 "raid_level": "raid5f", 00:16:31.187 "superblock": true, 00:16:31.187 "num_base_bdevs": 3, 00:16:31.187 "num_base_bdevs_discovered": 0, 00:16:31.187 "num_base_bdevs_operational": 3, 00:16:31.187 "base_bdevs_list": [ 00:16:31.187 { 00:16:31.187 "name": "BaseBdev1", 00:16:31.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.187 "is_configured": false, 00:16:31.187 "data_offset": 0, 00:16:31.187 "data_size": 0 00:16:31.187 }, 00:16:31.187 { 00:16:31.187 "name": "BaseBdev2", 00:16:31.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.187 "is_configured": false, 00:16:31.187 
"data_offset": 0, 00:16:31.187 "data_size": 0 00:16:31.187 }, 00:16:31.187 { 00:16:31.187 "name": "BaseBdev3", 00:16:31.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.187 "is_configured": false, 00:16:31.187 "data_offset": 0, 00:16:31.187 "data_size": 0 00:16:31.187 } 00:16:31.187 ] 00:16:31.187 }' 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.187 11:28:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.756 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:31.756 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.756 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.756 [2024-11-15 11:28:14.451068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:31.756 [2024-11-15 11:28:14.451133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:31.756 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.756 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:31.756 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.756 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.756 [2024-11-15 11:28:14.459036] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.756 [2024-11-15 11:28:14.459100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.756 [2024-11-15 11:28:14.459115] 
bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.756 [2024-11-15 11:28:14.459130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.756 [2024-11-15 11:28:14.459139] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.756 [2024-11-15 11:28:14.459153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.756 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.756 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:31.756 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.756 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.756 [2024-11-15 11:28:14.503284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.756 BaseBdev1 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.757 [ 00:16:31.757 { 00:16:31.757 "name": "BaseBdev1", 00:16:31.757 "aliases": [ 00:16:31.757 "e7d8696c-55cd-4445-8a50-35e71db4c15f" 00:16:31.757 ], 00:16:31.757 "product_name": "Malloc disk", 00:16:31.757 "block_size": 512, 00:16:31.757 "num_blocks": 65536, 00:16:31.757 "uuid": "e7d8696c-55cd-4445-8a50-35e71db4c15f", 00:16:31.757 "assigned_rate_limits": { 00:16:31.757 "rw_ios_per_sec": 0, 00:16:31.757 "rw_mbytes_per_sec": 0, 00:16:31.757 "r_mbytes_per_sec": 0, 00:16:31.757 "w_mbytes_per_sec": 0 00:16:31.757 }, 00:16:31.757 "claimed": true, 00:16:31.757 "claim_type": "exclusive_write", 00:16:31.757 "zoned": false, 00:16:31.757 "supported_io_types": { 00:16:31.757 "read": true, 00:16:31.757 "write": true, 00:16:31.757 "unmap": true, 00:16:31.757 "flush": true, 00:16:31.757 "reset": true, 00:16:31.757 "nvme_admin": false, 00:16:31.757 "nvme_io": false, 00:16:31.757 "nvme_io_md": false, 00:16:31.757 "write_zeroes": true, 00:16:31.757 "zcopy": true, 00:16:31.757 "get_zone_info": false, 00:16:31.757 "zone_management": false, 00:16:31.757 "zone_append": false, 00:16:31.757 "compare": false, 00:16:31.757 "compare_and_write": false, 00:16:31.757 "abort": true, 00:16:31.757 "seek_hole": false, 00:16:31.757 
"seek_data": false, 00:16:31.757 "copy": true, 00:16:31.757 "nvme_iov_md": false 00:16:31.757 }, 00:16:31.757 "memory_domains": [ 00:16:31.757 { 00:16:31.757 "dma_device_id": "system", 00:16:31.757 "dma_device_type": 1 00:16:31.757 }, 00:16:31.757 { 00:16:31.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.757 "dma_device_type": 2 00:16:31.757 } 00:16:31.757 ], 00:16:31.757 "driver_specific": {} 00:16:31.757 } 00:16:31.757 ] 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.757 "name": "Existed_Raid", 00:16:31.757 "uuid": "68c787ca-6e40-4244-aed5-92f4fd4cec16", 00:16:31.757 "strip_size_kb": 64, 00:16:31.757 "state": "configuring", 00:16:31.757 "raid_level": "raid5f", 00:16:31.757 "superblock": true, 00:16:31.757 "num_base_bdevs": 3, 00:16:31.757 "num_base_bdevs_discovered": 1, 00:16:31.757 "num_base_bdevs_operational": 3, 00:16:31.757 "base_bdevs_list": [ 00:16:31.757 { 00:16:31.757 "name": "BaseBdev1", 00:16:31.757 "uuid": "e7d8696c-55cd-4445-8a50-35e71db4c15f", 00:16:31.757 "is_configured": true, 00:16:31.757 "data_offset": 2048, 00:16:31.757 "data_size": 63488 00:16:31.757 }, 00:16:31.757 { 00:16:31.757 "name": "BaseBdev2", 00:16:31.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.757 "is_configured": false, 00:16:31.757 "data_offset": 0, 00:16:31.757 "data_size": 0 00:16:31.757 }, 00:16:31.757 { 00:16:31.757 "name": "BaseBdev3", 00:16:31.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.757 "is_configured": false, 00:16:31.757 "data_offset": 0, 00:16:31.757 "data_size": 0 00:16:31.757 } 00:16:31.757 ] 00:16:31.757 }' 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.757 11:28:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.324 [2024-11-15 11:28:15.067607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.324 [2024-11-15 11:28:15.067695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.324 [2024-11-15 11:28:15.079650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.324 [2024-11-15 11:28:15.082251] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.324 [2024-11-15 11:28:15.082315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.324 [2024-11-15 11:28:15.082332] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:32.324 [2024-11-15 11:28:15.082350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.324 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.324 "name": 
"Existed_Raid", 00:16:32.324 "uuid": "3b929aa3-5cba-43d9-8c33-a1f1061c0b7c", 00:16:32.324 "strip_size_kb": 64, 00:16:32.324 "state": "configuring", 00:16:32.324 "raid_level": "raid5f", 00:16:32.324 "superblock": true, 00:16:32.324 "num_base_bdevs": 3, 00:16:32.324 "num_base_bdevs_discovered": 1, 00:16:32.324 "num_base_bdevs_operational": 3, 00:16:32.324 "base_bdevs_list": [ 00:16:32.324 { 00:16:32.324 "name": "BaseBdev1", 00:16:32.324 "uuid": "e7d8696c-55cd-4445-8a50-35e71db4c15f", 00:16:32.324 "is_configured": true, 00:16:32.324 "data_offset": 2048, 00:16:32.324 "data_size": 63488 00:16:32.324 }, 00:16:32.324 { 00:16:32.324 "name": "BaseBdev2", 00:16:32.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.324 "is_configured": false, 00:16:32.324 "data_offset": 0, 00:16:32.324 "data_size": 0 00:16:32.324 }, 00:16:32.324 { 00:16:32.325 "name": "BaseBdev3", 00:16:32.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.325 "is_configured": false, 00:16:32.325 "data_offset": 0, 00:16:32.325 "data_size": 0 00:16:32.325 } 00:16:32.325 ] 00:16:32.325 }' 00:16:32.325 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.325 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.892 [2024-11-15 11:28:15.651055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.892 BaseBdev2 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.892 [ 00:16:32.892 { 00:16:32.892 "name": "BaseBdev2", 00:16:32.892 "aliases": [ 00:16:32.892 "1c6806ff-e7e1-402e-b417-e3e050bc5a74" 00:16:32.892 ], 00:16:32.892 "product_name": "Malloc disk", 00:16:32.892 "block_size": 512, 00:16:32.892 "num_blocks": 65536, 00:16:32.892 "uuid": "1c6806ff-e7e1-402e-b417-e3e050bc5a74", 00:16:32.892 "assigned_rate_limits": { 00:16:32.892 "rw_ios_per_sec": 0, 00:16:32.892 "rw_mbytes_per_sec": 0, 00:16:32.892 "r_mbytes_per_sec": 0, 00:16:32.892 "w_mbytes_per_sec": 0 00:16:32.892 }, 00:16:32.892 "claimed": true, 
00:16:32.892 "claim_type": "exclusive_write", 00:16:32.892 "zoned": false, 00:16:32.892 "supported_io_types": { 00:16:32.892 "read": true, 00:16:32.892 "write": true, 00:16:32.892 "unmap": true, 00:16:32.892 "flush": true, 00:16:32.892 "reset": true, 00:16:32.892 "nvme_admin": false, 00:16:32.892 "nvme_io": false, 00:16:32.892 "nvme_io_md": false, 00:16:32.892 "write_zeroes": true, 00:16:32.892 "zcopy": true, 00:16:32.892 "get_zone_info": false, 00:16:32.892 "zone_management": false, 00:16:32.892 "zone_append": false, 00:16:32.892 "compare": false, 00:16:32.892 "compare_and_write": false, 00:16:32.892 "abort": true, 00:16:32.892 "seek_hole": false, 00:16:32.892 "seek_data": false, 00:16:32.892 "copy": true, 00:16:32.892 "nvme_iov_md": false 00:16:32.892 }, 00:16:32.892 "memory_domains": [ 00:16:32.892 { 00:16:32.892 "dma_device_id": "system", 00:16:32.892 "dma_device_type": 1 00:16:32.892 }, 00:16:32.892 { 00:16:32.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.892 "dma_device_type": 2 00:16:32.892 } 00:16:32.892 ], 00:16:32.892 "driver_specific": {} 00:16:32.892 } 00:16:32.892 ] 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.892 11:28:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.892 "name": "Existed_Raid", 00:16:32.892 "uuid": "3b929aa3-5cba-43d9-8c33-a1f1061c0b7c", 00:16:32.892 "strip_size_kb": 64, 00:16:32.892 "state": "configuring", 00:16:32.892 "raid_level": "raid5f", 00:16:32.892 "superblock": true, 00:16:32.892 "num_base_bdevs": 3, 00:16:32.892 "num_base_bdevs_discovered": 2, 00:16:32.892 "num_base_bdevs_operational": 3, 00:16:32.892 "base_bdevs_list": [ 00:16:32.892 { 00:16:32.892 "name": "BaseBdev1", 00:16:32.892 "uuid": "e7d8696c-55cd-4445-8a50-35e71db4c15f", 
00:16:32.892 "is_configured": true, 00:16:32.892 "data_offset": 2048, 00:16:32.892 "data_size": 63488 00:16:32.892 }, 00:16:32.892 { 00:16:32.892 "name": "BaseBdev2", 00:16:32.892 "uuid": "1c6806ff-e7e1-402e-b417-e3e050bc5a74", 00:16:32.892 "is_configured": true, 00:16:32.892 "data_offset": 2048, 00:16:32.892 "data_size": 63488 00:16:32.892 }, 00:16:32.892 { 00:16:32.892 "name": "BaseBdev3", 00:16:32.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.892 "is_configured": false, 00:16:32.892 "data_offset": 0, 00:16:32.892 "data_size": 0 00:16:32.892 } 00:16:32.892 ] 00:16:32.892 }' 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.892 11:28:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.461 [2024-11-15 11:28:16.250534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.461 [2024-11-15 11:28:16.250982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:33.461 [2024-11-15 11:28:16.251045] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:33.461 [2024-11-15 11:28:16.251441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:33.461 BaseBdev3 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.461 [2024-11-15 11:28:16.257092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:33.461 [2024-11-15 11:28:16.257129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:33.461 [2024-11-15 11:28:16.257564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.461 [ 00:16:33.461 { 00:16:33.461 "name": "BaseBdev3", 00:16:33.461 "aliases": [ 00:16:33.461 "60515bbb-edad-4cf6-b641-2439058ce7a5" 00:16:33.461 ], 00:16:33.461 "product_name": "Malloc disk", 00:16:33.461 "block_size": 512, 00:16:33.461 
"num_blocks": 65536, 00:16:33.461 "uuid": "60515bbb-edad-4cf6-b641-2439058ce7a5", 00:16:33.461 "assigned_rate_limits": { 00:16:33.461 "rw_ios_per_sec": 0, 00:16:33.461 "rw_mbytes_per_sec": 0, 00:16:33.461 "r_mbytes_per_sec": 0, 00:16:33.461 "w_mbytes_per_sec": 0 00:16:33.461 }, 00:16:33.461 "claimed": true, 00:16:33.461 "claim_type": "exclusive_write", 00:16:33.461 "zoned": false, 00:16:33.461 "supported_io_types": { 00:16:33.461 "read": true, 00:16:33.461 "write": true, 00:16:33.461 "unmap": true, 00:16:33.461 "flush": true, 00:16:33.461 "reset": true, 00:16:33.461 "nvme_admin": false, 00:16:33.461 "nvme_io": false, 00:16:33.461 "nvme_io_md": false, 00:16:33.461 "write_zeroes": true, 00:16:33.461 "zcopy": true, 00:16:33.461 "get_zone_info": false, 00:16:33.461 "zone_management": false, 00:16:33.461 "zone_append": false, 00:16:33.461 "compare": false, 00:16:33.461 "compare_and_write": false, 00:16:33.461 "abort": true, 00:16:33.461 "seek_hole": false, 00:16:33.461 "seek_data": false, 00:16:33.461 "copy": true, 00:16:33.461 "nvme_iov_md": false 00:16:33.461 }, 00:16:33.461 "memory_domains": [ 00:16:33.461 { 00:16:33.461 "dma_device_id": "system", 00:16:33.461 "dma_device_type": 1 00:16:33.461 }, 00:16:33.461 { 00:16:33.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.461 "dma_device_type": 2 00:16:33.461 } 00:16:33.461 ], 00:16:33.461 "driver_specific": {} 00:16:33.461 } 00:16:33.461 ] 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.461 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.461 "name": "Existed_Raid", 00:16:33.461 "uuid": "3b929aa3-5cba-43d9-8c33-a1f1061c0b7c", 00:16:33.461 "strip_size_kb": 64, 00:16:33.461 "state": "online", 00:16:33.461 "raid_level": "raid5f", 00:16:33.461 "superblock": true, 
00:16:33.461 "num_base_bdevs": 3, 00:16:33.461 "num_base_bdevs_discovered": 3, 00:16:33.461 "num_base_bdevs_operational": 3, 00:16:33.461 "base_bdevs_list": [ 00:16:33.461 { 00:16:33.461 "name": "BaseBdev1", 00:16:33.461 "uuid": "e7d8696c-55cd-4445-8a50-35e71db4c15f", 00:16:33.461 "is_configured": true, 00:16:33.461 "data_offset": 2048, 00:16:33.461 "data_size": 63488 00:16:33.461 }, 00:16:33.461 { 00:16:33.461 "name": "BaseBdev2", 00:16:33.461 "uuid": "1c6806ff-e7e1-402e-b417-e3e050bc5a74", 00:16:33.461 "is_configured": true, 00:16:33.461 "data_offset": 2048, 00:16:33.461 "data_size": 63488 00:16:33.461 }, 00:16:33.461 { 00:16:33.461 "name": "BaseBdev3", 00:16:33.461 "uuid": "60515bbb-edad-4cf6-b641-2439058ce7a5", 00:16:33.461 "is_configured": true, 00:16:33.461 "data_offset": 2048, 00:16:33.461 "data_size": 63488 00:16:33.461 } 00:16:33.461 ] 00:16:33.462 }' 00:16:33.462 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.462 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.029 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.030 [2024-11-15 11:28:16.828335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:34.030 "name": "Existed_Raid", 00:16:34.030 "aliases": [ 00:16:34.030 "3b929aa3-5cba-43d9-8c33-a1f1061c0b7c" 00:16:34.030 ], 00:16:34.030 "product_name": "Raid Volume", 00:16:34.030 "block_size": 512, 00:16:34.030 "num_blocks": 126976, 00:16:34.030 "uuid": "3b929aa3-5cba-43d9-8c33-a1f1061c0b7c", 00:16:34.030 "assigned_rate_limits": { 00:16:34.030 "rw_ios_per_sec": 0, 00:16:34.030 "rw_mbytes_per_sec": 0, 00:16:34.030 "r_mbytes_per_sec": 0, 00:16:34.030 "w_mbytes_per_sec": 0 00:16:34.030 }, 00:16:34.030 "claimed": false, 00:16:34.030 "zoned": false, 00:16:34.030 "supported_io_types": { 00:16:34.030 "read": true, 00:16:34.030 "write": true, 00:16:34.030 "unmap": false, 00:16:34.030 "flush": false, 00:16:34.030 "reset": true, 00:16:34.030 "nvme_admin": false, 00:16:34.030 "nvme_io": false, 00:16:34.030 "nvme_io_md": false, 00:16:34.030 "write_zeroes": true, 00:16:34.030 "zcopy": false, 00:16:34.030 "get_zone_info": false, 00:16:34.030 "zone_management": false, 00:16:34.030 "zone_append": false, 00:16:34.030 "compare": false, 00:16:34.030 "compare_and_write": false, 00:16:34.030 "abort": false, 00:16:34.030 "seek_hole": false, 00:16:34.030 "seek_data": false, 00:16:34.030 "copy": false, 00:16:34.030 "nvme_iov_md": false 00:16:34.030 }, 00:16:34.030 "driver_specific": { 00:16:34.030 "raid": { 00:16:34.030 "uuid": "3b929aa3-5cba-43d9-8c33-a1f1061c0b7c", 00:16:34.030 
"strip_size_kb": 64, 00:16:34.030 "state": "online", 00:16:34.030 "raid_level": "raid5f", 00:16:34.030 "superblock": true, 00:16:34.030 "num_base_bdevs": 3, 00:16:34.030 "num_base_bdevs_discovered": 3, 00:16:34.030 "num_base_bdevs_operational": 3, 00:16:34.030 "base_bdevs_list": [ 00:16:34.030 { 00:16:34.030 "name": "BaseBdev1", 00:16:34.030 "uuid": "e7d8696c-55cd-4445-8a50-35e71db4c15f", 00:16:34.030 "is_configured": true, 00:16:34.030 "data_offset": 2048, 00:16:34.030 "data_size": 63488 00:16:34.030 }, 00:16:34.030 { 00:16:34.030 "name": "BaseBdev2", 00:16:34.030 "uuid": "1c6806ff-e7e1-402e-b417-e3e050bc5a74", 00:16:34.030 "is_configured": true, 00:16:34.030 "data_offset": 2048, 00:16:34.030 "data_size": 63488 00:16:34.030 }, 00:16:34.030 { 00:16:34.030 "name": "BaseBdev3", 00:16:34.030 "uuid": "60515bbb-edad-4cf6-b641-2439058ce7a5", 00:16:34.030 "is_configured": true, 00:16:34.030 "data_offset": 2048, 00:16:34.030 "data_size": 63488 00:16:34.030 } 00:16:34.030 ] 00:16:34.030 } 00:16:34.030 } 00:16:34.030 }' 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:34.030 BaseBdev2 00:16:34.030 BaseBdev3' 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:34.030 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.289 11:28:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.289 11:28:16 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:34.289 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.289 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.289 11:28:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.289 [2024-11-15 11:28:17.136140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.289 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.548 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.548 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.548 "name": "Existed_Raid", 00:16:34.548 "uuid": "3b929aa3-5cba-43d9-8c33-a1f1061c0b7c", 00:16:34.548 "strip_size_kb": 64, 00:16:34.548 "state": "online", 00:16:34.548 "raid_level": "raid5f", 00:16:34.548 "superblock": true, 00:16:34.548 "num_base_bdevs": 3, 00:16:34.548 "num_base_bdevs_discovered": 2, 00:16:34.548 "num_base_bdevs_operational": 2, 
00:16:34.548 "base_bdevs_list": [ 00:16:34.548 { 00:16:34.548 "name": null, 00:16:34.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.548 "is_configured": false, 00:16:34.548 "data_offset": 0, 00:16:34.548 "data_size": 63488 00:16:34.548 }, 00:16:34.548 { 00:16:34.548 "name": "BaseBdev2", 00:16:34.548 "uuid": "1c6806ff-e7e1-402e-b417-e3e050bc5a74", 00:16:34.548 "is_configured": true, 00:16:34.548 "data_offset": 2048, 00:16:34.548 "data_size": 63488 00:16:34.548 }, 00:16:34.548 { 00:16:34.548 "name": "BaseBdev3", 00:16:34.548 "uuid": "60515bbb-edad-4cf6-b641-2439058ce7a5", 00:16:34.548 "is_configured": true, 00:16:34.548 "data_offset": 2048, 00:16:34.548 "data_size": 63488 00:16:34.548 } 00:16:34.548 ] 00:16:34.548 }' 00:16:34.548 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.548 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.807 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:34.807 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:34.807 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.807 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:34.807 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.807 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.807 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.066 [2024-11-15 11:28:17.794955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:35.066 [2024-11-15 11:28:17.795188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.066 [2024-11-15 11:28:17.880485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:35.066 
11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.066 11:28:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.066 [2024-11-15 11:28:17.944570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:35.066 [2024-11-15 11:28:17.944688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.326 BaseBdev2 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.326 [ 00:16:35.326 { 
00:16:35.326 "name": "BaseBdev2", 00:16:35.326 "aliases": [ 00:16:35.326 "f118930c-094f-479c-a29e-bd2809e3abc5" 00:16:35.326 ], 00:16:35.326 "product_name": "Malloc disk", 00:16:35.326 "block_size": 512, 00:16:35.326 "num_blocks": 65536, 00:16:35.326 "uuid": "f118930c-094f-479c-a29e-bd2809e3abc5", 00:16:35.326 "assigned_rate_limits": { 00:16:35.326 "rw_ios_per_sec": 0, 00:16:35.326 "rw_mbytes_per_sec": 0, 00:16:35.326 "r_mbytes_per_sec": 0, 00:16:35.326 "w_mbytes_per_sec": 0 00:16:35.326 }, 00:16:35.326 "claimed": false, 00:16:35.326 "zoned": false, 00:16:35.326 "supported_io_types": { 00:16:35.326 "read": true, 00:16:35.326 "write": true, 00:16:35.326 "unmap": true, 00:16:35.326 "flush": true, 00:16:35.326 "reset": true, 00:16:35.326 "nvme_admin": false, 00:16:35.326 "nvme_io": false, 00:16:35.326 "nvme_io_md": false, 00:16:35.326 "write_zeroes": true, 00:16:35.326 "zcopy": true, 00:16:35.326 "get_zone_info": false, 00:16:35.326 "zone_management": false, 00:16:35.326 "zone_append": false, 00:16:35.326 "compare": false, 00:16:35.326 "compare_and_write": false, 00:16:35.326 "abort": true, 00:16:35.326 "seek_hole": false, 00:16:35.326 "seek_data": false, 00:16:35.326 "copy": true, 00:16:35.326 "nvme_iov_md": false 00:16:35.326 }, 00:16:35.326 "memory_domains": [ 00:16:35.326 { 00:16:35.326 "dma_device_id": "system", 00:16:35.326 "dma_device_type": 1 00:16:35.326 }, 00:16:35.326 { 00:16:35.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.326 "dma_device_type": 2 00:16:35.326 } 00:16:35.326 ], 00:16:35.326 "driver_specific": {} 00:16:35.326 } 00:16:35.326 ] 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.326 BaseBdev3 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.326 11:28:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.326 [ 00:16:35.326 { 00:16:35.326 "name": "BaseBdev3", 00:16:35.326 "aliases": [ 00:16:35.326 "e79895a0-5388-4048-a5c6-07c455616cc9" 00:16:35.326 ], 00:16:35.326 "product_name": "Malloc disk", 00:16:35.326 "block_size": 512, 00:16:35.326 "num_blocks": 65536, 00:16:35.326 "uuid": "e79895a0-5388-4048-a5c6-07c455616cc9", 00:16:35.326 "assigned_rate_limits": { 00:16:35.326 "rw_ios_per_sec": 0, 00:16:35.326 "rw_mbytes_per_sec": 0, 00:16:35.326 "r_mbytes_per_sec": 0, 00:16:35.326 "w_mbytes_per_sec": 0 00:16:35.326 }, 00:16:35.326 "claimed": false, 00:16:35.326 "zoned": false, 00:16:35.326 "supported_io_types": { 00:16:35.326 "read": true, 00:16:35.326 "write": true, 00:16:35.326 "unmap": true, 00:16:35.326 "flush": true, 00:16:35.326 "reset": true, 00:16:35.326 "nvme_admin": false, 00:16:35.326 "nvme_io": false, 00:16:35.326 "nvme_io_md": false, 00:16:35.326 "write_zeroes": true, 00:16:35.326 "zcopy": true, 00:16:35.326 "get_zone_info": false, 00:16:35.326 "zone_management": false, 00:16:35.326 "zone_append": false, 00:16:35.326 "compare": false, 00:16:35.326 "compare_and_write": false, 00:16:35.326 "abort": true, 00:16:35.326 "seek_hole": false, 00:16:35.326 "seek_data": false, 00:16:35.326 "copy": true, 00:16:35.326 "nvme_iov_md": false 00:16:35.326 }, 00:16:35.326 "memory_domains": [ 00:16:35.326 { 00:16:35.326 "dma_device_id": "system", 00:16:35.326 "dma_device_type": 1 00:16:35.326 }, 00:16:35.326 { 00:16:35.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.326 "dma_device_type": 2 00:16:35.326 } 00:16:35.326 ], 00:16:35.326 "driver_specific": {} 00:16:35.326 } 00:16:35.326 ] 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.326 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.326 [2024-11-15 11:28:18.238336] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:35.326 [2024-11-15 11:28:18.238390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:35.326 [2024-11-15 11:28:18.238424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.326 [2024-11-15 11:28:18.240927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.327 11:28:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.327 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.585 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.586 "name": "Existed_Raid", 00:16:35.586 "uuid": "8a2f2c30-0612-48db-9761-6b3b3be70435", 00:16:35.586 "strip_size_kb": 64, 00:16:35.586 "state": "configuring", 00:16:35.586 "raid_level": "raid5f", 00:16:35.586 "superblock": true, 00:16:35.586 "num_base_bdevs": 3, 00:16:35.586 "num_base_bdevs_discovered": 2, 00:16:35.586 "num_base_bdevs_operational": 3, 00:16:35.586 "base_bdevs_list": [ 00:16:35.586 { 00:16:35.586 "name": "BaseBdev1", 00:16:35.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.586 "is_configured": false, 00:16:35.586 "data_offset": 0, 00:16:35.586 "data_size": 0 00:16:35.586 }, 00:16:35.586 { 00:16:35.586 "name": "BaseBdev2", 00:16:35.586 "uuid": "f118930c-094f-479c-a29e-bd2809e3abc5", 00:16:35.586 "is_configured": true, 00:16:35.586 "data_offset": 2048, 00:16:35.586 "data_size": 63488 00:16:35.586 }, 00:16:35.586 { 
00:16:35.586 "name": "BaseBdev3", 00:16:35.586 "uuid": "e79895a0-5388-4048-a5c6-07c455616cc9", 00:16:35.586 "is_configured": true, 00:16:35.586 "data_offset": 2048, 00:16:35.586 "data_size": 63488 00:16:35.586 } 00:16:35.586 ] 00:16:35.586 }' 00:16:35.586 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.586 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.844 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:35.844 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.844 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.103 [2024-11-15 11:28:18.794604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.103 "name": "Existed_Raid", 00:16:36.103 "uuid": "8a2f2c30-0612-48db-9761-6b3b3be70435", 00:16:36.103 "strip_size_kb": 64, 00:16:36.103 "state": "configuring", 00:16:36.103 "raid_level": "raid5f", 00:16:36.103 "superblock": true, 00:16:36.103 "num_base_bdevs": 3, 00:16:36.103 "num_base_bdevs_discovered": 1, 00:16:36.103 "num_base_bdevs_operational": 3, 00:16:36.103 "base_bdevs_list": [ 00:16:36.103 { 00:16:36.103 "name": "BaseBdev1", 00:16:36.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.103 "is_configured": false, 00:16:36.103 "data_offset": 0, 00:16:36.103 "data_size": 0 00:16:36.103 }, 00:16:36.103 { 00:16:36.103 "name": null, 00:16:36.103 "uuid": "f118930c-094f-479c-a29e-bd2809e3abc5", 00:16:36.103 "is_configured": false, 00:16:36.103 "data_offset": 0, 00:16:36.103 "data_size": 63488 00:16:36.103 }, 00:16:36.103 { 00:16:36.103 "name": "BaseBdev3", 00:16:36.103 "uuid": "e79895a0-5388-4048-a5c6-07c455616cc9", 00:16:36.103 "is_configured": true, 00:16:36.103 "data_offset": 2048, 00:16:36.103 "data_size": 
63488 00:16:36.103 } 00:16:36.103 ] 00:16:36.103 }' 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.103 11:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.671 [2024-11-15 11:28:19.433722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.671 BaseBdev1 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:36.671 11:28:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.671 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.671 [ 00:16:36.671 { 00:16:36.671 "name": "BaseBdev1", 00:16:36.671 "aliases": [ 00:16:36.671 "f4d9fec0-ee1d-49bd-a4f1-648eb238078f" 00:16:36.671 ], 00:16:36.671 "product_name": "Malloc disk", 00:16:36.671 "block_size": 512, 00:16:36.671 "num_blocks": 65536, 00:16:36.671 "uuid": "f4d9fec0-ee1d-49bd-a4f1-648eb238078f", 00:16:36.671 "assigned_rate_limits": { 00:16:36.671 "rw_ios_per_sec": 0, 00:16:36.671 "rw_mbytes_per_sec": 0, 00:16:36.671 "r_mbytes_per_sec": 0, 00:16:36.671 "w_mbytes_per_sec": 0 00:16:36.671 }, 00:16:36.671 "claimed": true, 00:16:36.671 "claim_type": "exclusive_write", 00:16:36.671 "zoned": false, 00:16:36.671 "supported_io_types": { 00:16:36.671 "read": true, 00:16:36.671 "write": true, 00:16:36.671 "unmap": true, 00:16:36.671 "flush": true, 00:16:36.671 "reset": true, 00:16:36.671 "nvme_admin": false, 00:16:36.671 
"nvme_io": false, 00:16:36.671 "nvme_io_md": false, 00:16:36.671 "write_zeroes": true, 00:16:36.671 "zcopy": true, 00:16:36.671 "get_zone_info": false, 00:16:36.671 "zone_management": false, 00:16:36.671 "zone_append": false, 00:16:36.671 "compare": false, 00:16:36.671 "compare_and_write": false, 00:16:36.671 "abort": true, 00:16:36.671 "seek_hole": false, 00:16:36.671 "seek_data": false, 00:16:36.671 "copy": true, 00:16:36.671 "nvme_iov_md": false 00:16:36.671 }, 00:16:36.671 "memory_domains": [ 00:16:36.671 { 00:16:36.671 "dma_device_id": "system", 00:16:36.672 "dma_device_type": 1 00:16:36.672 }, 00:16:36.672 { 00:16:36.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.672 "dma_device_type": 2 00:16:36.672 } 00:16:36.672 ], 00:16:36.672 "driver_specific": {} 00:16:36.672 } 00:16:36.672 ] 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.672 "name": "Existed_Raid", 00:16:36.672 "uuid": "8a2f2c30-0612-48db-9761-6b3b3be70435", 00:16:36.672 "strip_size_kb": 64, 00:16:36.672 "state": "configuring", 00:16:36.672 "raid_level": "raid5f", 00:16:36.672 "superblock": true, 00:16:36.672 "num_base_bdevs": 3, 00:16:36.672 "num_base_bdevs_discovered": 2, 00:16:36.672 "num_base_bdevs_operational": 3, 00:16:36.672 "base_bdevs_list": [ 00:16:36.672 { 00:16:36.672 "name": "BaseBdev1", 00:16:36.672 "uuid": "f4d9fec0-ee1d-49bd-a4f1-648eb238078f", 00:16:36.672 "is_configured": true, 00:16:36.672 "data_offset": 2048, 00:16:36.672 "data_size": 63488 00:16:36.672 }, 00:16:36.672 { 00:16:36.672 "name": null, 00:16:36.672 "uuid": "f118930c-094f-479c-a29e-bd2809e3abc5", 00:16:36.672 "is_configured": false, 00:16:36.672 "data_offset": 0, 00:16:36.672 "data_size": 63488 00:16:36.672 }, 00:16:36.672 { 00:16:36.672 "name": "BaseBdev3", 00:16:36.672 "uuid": "e79895a0-5388-4048-a5c6-07c455616cc9", 00:16:36.672 "is_configured": true, 00:16:36.672 "data_offset": 2048, 00:16:36.672 "data_size": 
63488 00:16:36.672 } 00:16:36.672 ] 00:16:36.672 }' 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.672 11:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.240 [2024-11-15 11:28:20.077894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.240 11:28:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.240 "name": "Existed_Raid", 00:16:37.240 "uuid": "8a2f2c30-0612-48db-9761-6b3b3be70435", 00:16:37.240 "strip_size_kb": 64, 00:16:37.240 "state": "configuring", 00:16:37.240 "raid_level": "raid5f", 00:16:37.240 "superblock": true, 00:16:37.240 "num_base_bdevs": 3, 00:16:37.240 "num_base_bdevs_discovered": 1, 00:16:37.240 "num_base_bdevs_operational": 3, 00:16:37.240 "base_bdevs_list": [ 00:16:37.240 { 00:16:37.240 "name": "BaseBdev1", 00:16:37.240 "uuid": "f4d9fec0-ee1d-49bd-a4f1-648eb238078f", 
00:16:37.240 "is_configured": true, 00:16:37.240 "data_offset": 2048, 00:16:37.240 "data_size": 63488 00:16:37.240 }, 00:16:37.240 { 00:16:37.240 "name": null, 00:16:37.240 "uuid": "f118930c-094f-479c-a29e-bd2809e3abc5", 00:16:37.240 "is_configured": false, 00:16:37.240 "data_offset": 0, 00:16:37.240 "data_size": 63488 00:16:37.240 }, 00:16:37.240 { 00:16:37.240 "name": null, 00:16:37.240 "uuid": "e79895a0-5388-4048-a5c6-07c455616cc9", 00:16:37.240 "is_configured": false, 00:16:37.240 "data_offset": 0, 00:16:37.240 "data_size": 63488 00:16:37.240 } 00:16:37.240 ] 00:16:37.240 }' 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.240 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.820 [2024-11-15 11:28:20.650140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.820 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.821 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.821 11:28:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.821 "name": "Existed_Raid", 00:16:37.821 "uuid": "8a2f2c30-0612-48db-9761-6b3b3be70435", 00:16:37.821 "strip_size_kb": 64, 00:16:37.821 "state": "configuring", 00:16:37.821 "raid_level": "raid5f", 00:16:37.821 "superblock": true, 00:16:37.821 "num_base_bdevs": 3, 00:16:37.821 "num_base_bdevs_discovered": 2, 00:16:37.821 "num_base_bdevs_operational": 3, 00:16:37.821 "base_bdevs_list": [ 00:16:37.821 { 00:16:37.821 "name": "BaseBdev1", 00:16:37.821 "uuid": "f4d9fec0-ee1d-49bd-a4f1-648eb238078f", 00:16:37.821 "is_configured": true, 00:16:37.821 "data_offset": 2048, 00:16:37.821 "data_size": 63488 00:16:37.821 }, 00:16:37.821 { 00:16:37.821 "name": null, 00:16:37.821 "uuid": "f118930c-094f-479c-a29e-bd2809e3abc5", 00:16:37.821 "is_configured": false, 00:16:37.821 "data_offset": 0, 00:16:37.821 "data_size": 63488 00:16:37.821 }, 00:16:37.821 { 00:16:37.821 "name": "BaseBdev3", 00:16:37.821 "uuid": "e79895a0-5388-4048-a5c6-07c455616cc9", 00:16:37.821 "is_configured": true, 00:16:37.821 "data_offset": 2048, 00:16:37.821 "data_size": 63488 00:16:37.821 } 00:16:37.821 ] 00:16:37.821 }' 00:16:37.821 11:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.821 11:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.387 11:28:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.387 [2024-11-15 11:28:21.234368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.387 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.645 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.645 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.645 "name": "Existed_Raid", 00:16:38.645 "uuid": "8a2f2c30-0612-48db-9761-6b3b3be70435", 00:16:38.645 "strip_size_kb": 64, 00:16:38.645 "state": "configuring", 00:16:38.645 "raid_level": "raid5f", 00:16:38.645 "superblock": true, 00:16:38.645 "num_base_bdevs": 3, 00:16:38.645 "num_base_bdevs_discovered": 1, 00:16:38.645 "num_base_bdevs_operational": 3, 00:16:38.645 "base_bdevs_list": [ 00:16:38.645 { 00:16:38.645 "name": null, 00:16:38.645 "uuid": "f4d9fec0-ee1d-49bd-a4f1-648eb238078f", 00:16:38.645 "is_configured": false, 00:16:38.645 "data_offset": 0, 00:16:38.645 "data_size": 63488 00:16:38.645 }, 00:16:38.645 { 00:16:38.645 "name": null, 00:16:38.645 "uuid": "f118930c-094f-479c-a29e-bd2809e3abc5", 00:16:38.646 "is_configured": false, 00:16:38.646 "data_offset": 0, 00:16:38.646 "data_size": 63488 00:16:38.646 }, 00:16:38.646 { 00:16:38.646 "name": "BaseBdev3", 00:16:38.646 "uuid": "e79895a0-5388-4048-a5c6-07c455616cc9", 00:16:38.646 "is_configured": true, 00:16:38.646 "data_offset": 2048, 00:16:38.646 "data_size": 63488 00:16:38.646 } 00:16:38.646 ] 00:16:38.646 }' 00:16:38.646 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.646 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.904 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:38.904 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:38.904 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.904 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.162 [2024-11-15 11:28:21.901955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.162 11:28:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.162 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.163 "name": "Existed_Raid", 00:16:39.163 "uuid": "8a2f2c30-0612-48db-9761-6b3b3be70435", 00:16:39.163 "strip_size_kb": 64, 00:16:39.163 "state": "configuring", 00:16:39.163 "raid_level": "raid5f", 00:16:39.163 "superblock": true, 00:16:39.163 "num_base_bdevs": 3, 00:16:39.163 "num_base_bdevs_discovered": 2, 00:16:39.163 "num_base_bdevs_operational": 3, 00:16:39.163 "base_bdevs_list": [ 00:16:39.163 { 00:16:39.163 "name": null, 00:16:39.163 "uuid": "f4d9fec0-ee1d-49bd-a4f1-648eb238078f", 00:16:39.163 "is_configured": false, 00:16:39.163 "data_offset": 0, 00:16:39.163 "data_size": 63488 00:16:39.163 }, 00:16:39.163 { 00:16:39.163 "name": "BaseBdev2", 00:16:39.163 "uuid": "f118930c-094f-479c-a29e-bd2809e3abc5", 00:16:39.163 "is_configured": true, 00:16:39.163 "data_offset": 2048, 00:16:39.163 "data_size": 63488 00:16:39.163 }, 00:16:39.163 { 
00:16:39.163 "name": "BaseBdev3", 00:16:39.163 "uuid": "e79895a0-5388-4048-a5c6-07c455616cc9", 00:16:39.163 "is_configured": true, 00:16:39.163 "data_offset": 2048, 00:16:39.163 "data_size": 63488 00:16:39.163 } 00:16:39.163 ] 00:16:39.163 }' 00:16:39.163 11:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.163 11:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f4d9fec0-ee1d-49bd-a4f1-648eb238078f 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.731 [2024-11-15 11:28:22.597901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:39.731 [2024-11-15 11:28:22.598253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:39.731 [2024-11-15 11:28:22.598292] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:39.731 [2024-11-15 11:28:22.598626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:39.731 NewBaseBdev 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.731 [2024-11-15 11:28:22.603424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:39.731 
[2024-11-15 11:28:22.603448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:39.731 [2024-11-15 11:28:22.603646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.731 [ 00:16:39.731 { 00:16:39.731 "name": "NewBaseBdev", 00:16:39.731 "aliases": [ 00:16:39.731 "f4d9fec0-ee1d-49bd-a4f1-648eb238078f" 00:16:39.731 ], 00:16:39.731 "product_name": "Malloc disk", 00:16:39.731 "block_size": 512, 00:16:39.731 "num_blocks": 65536, 00:16:39.731 "uuid": "f4d9fec0-ee1d-49bd-a4f1-648eb238078f", 00:16:39.731 "assigned_rate_limits": { 00:16:39.731 "rw_ios_per_sec": 0, 00:16:39.731 "rw_mbytes_per_sec": 0, 00:16:39.731 "r_mbytes_per_sec": 0, 00:16:39.731 "w_mbytes_per_sec": 0 00:16:39.731 }, 00:16:39.731 "claimed": true, 00:16:39.731 "claim_type": "exclusive_write", 00:16:39.731 "zoned": false, 00:16:39.731 "supported_io_types": { 00:16:39.731 "read": true, 00:16:39.731 "write": true, 00:16:39.731 "unmap": true, 00:16:39.731 "flush": true, 00:16:39.731 "reset": true, 00:16:39.731 "nvme_admin": false, 00:16:39.731 "nvme_io": false, 00:16:39.731 "nvme_io_md": false, 00:16:39.731 "write_zeroes": true, 00:16:39.731 "zcopy": true, 00:16:39.731 "get_zone_info": false, 00:16:39.731 "zone_management": false, 00:16:39.731 "zone_append": false, 00:16:39.731 "compare": false, 00:16:39.731 "compare_and_write": false, 00:16:39.731 "abort": true, 00:16:39.731 "seek_hole": false, 00:16:39.731 "seek_data": false, 
00:16:39.731 "copy": true, 00:16:39.731 "nvme_iov_md": false 00:16:39.731 }, 00:16:39.731 "memory_domains": [ 00:16:39.731 { 00:16:39.731 "dma_device_id": "system", 00:16:39.731 "dma_device_type": 1 00:16:39.731 }, 00:16:39.731 { 00:16:39.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.731 "dma_device_type": 2 00:16:39.731 } 00:16:39.731 ], 00:16:39.731 "driver_specific": {} 00:16:39.731 } 00:16:39.731 ] 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.731 11:28:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.731 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.990 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.990 "name": "Existed_Raid", 00:16:39.990 "uuid": "8a2f2c30-0612-48db-9761-6b3b3be70435", 00:16:39.990 "strip_size_kb": 64, 00:16:39.990 "state": "online", 00:16:39.990 "raid_level": "raid5f", 00:16:39.990 "superblock": true, 00:16:39.990 "num_base_bdevs": 3, 00:16:39.990 "num_base_bdevs_discovered": 3, 00:16:39.990 "num_base_bdevs_operational": 3, 00:16:39.990 "base_bdevs_list": [ 00:16:39.990 { 00:16:39.990 "name": "NewBaseBdev", 00:16:39.990 "uuid": "f4d9fec0-ee1d-49bd-a4f1-648eb238078f", 00:16:39.990 "is_configured": true, 00:16:39.990 "data_offset": 2048, 00:16:39.990 "data_size": 63488 00:16:39.990 }, 00:16:39.990 { 00:16:39.990 "name": "BaseBdev2", 00:16:39.990 "uuid": "f118930c-094f-479c-a29e-bd2809e3abc5", 00:16:39.990 "is_configured": true, 00:16:39.990 "data_offset": 2048, 00:16:39.990 "data_size": 63488 00:16:39.990 }, 00:16:39.990 { 00:16:39.990 "name": "BaseBdev3", 00:16:39.990 "uuid": "e79895a0-5388-4048-a5c6-07c455616cc9", 00:16:39.990 "is_configured": true, 00:16:39.990 "data_offset": 2048, 00:16:39.990 "data_size": 63488 00:16:39.990 } 00:16:39.990 ] 00:16:39.990 }' 00:16:39.990 11:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.990 11:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.248 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:16:40.248 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:40.248 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:40.248 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:40.248 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:40.248 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:40.248 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:40.248 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.248 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.248 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:40.249 [2024-11-15 11:28:23.177795] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.249 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:40.508 "name": "Existed_Raid", 00:16:40.508 "aliases": [ 00:16:40.508 "8a2f2c30-0612-48db-9761-6b3b3be70435" 00:16:40.508 ], 00:16:40.508 "product_name": "Raid Volume", 00:16:40.508 "block_size": 512, 00:16:40.508 "num_blocks": 126976, 00:16:40.508 "uuid": "8a2f2c30-0612-48db-9761-6b3b3be70435", 00:16:40.508 "assigned_rate_limits": { 00:16:40.508 "rw_ios_per_sec": 0, 00:16:40.508 "rw_mbytes_per_sec": 0, 00:16:40.508 "r_mbytes_per_sec": 0, 00:16:40.508 "w_mbytes_per_sec": 0 00:16:40.508 }, 00:16:40.508 "claimed": false, 00:16:40.508 "zoned": false, 00:16:40.508 
"supported_io_types": { 00:16:40.508 "read": true, 00:16:40.508 "write": true, 00:16:40.508 "unmap": false, 00:16:40.508 "flush": false, 00:16:40.508 "reset": true, 00:16:40.508 "nvme_admin": false, 00:16:40.508 "nvme_io": false, 00:16:40.508 "nvme_io_md": false, 00:16:40.508 "write_zeroes": true, 00:16:40.508 "zcopy": false, 00:16:40.508 "get_zone_info": false, 00:16:40.508 "zone_management": false, 00:16:40.508 "zone_append": false, 00:16:40.508 "compare": false, 00:16:40.508 "compare_and_write": false, 00:16:40.508 "abort": false, 00:16:40.508 "seek_hole": false, 00:16:40.508 "seek_data": false, 00:16:40.508 "copy": false, 00:16:40.508 "nvme_iov_md": false 00:16:40.508 }, 00:16:40.508 "driver_specific": { 00:16:40.508 "raid": { 00:16:40.508 "uuid": "8a2f2c30-0612-48db-9761-6b3b3be70435", 00:16:40.508 "strip_size_kb": 64, 00:16:40.508 "state": "online", 00:16:40.508 "raid_level": "raid5f", 00:16:40.508 "superblock": true, 00:16:40.508 "num_base_bdevs": 3, 00:16:40.508 "num_base_bdevs_discovered": 3, 00:16:40.508 "num_base_bdevs_operational": 3, 00:16:40.508 "base_bdevs_list": [ 00:16:40.508 { 00:16:40.508 "name": "NewBaseBdev", 00:16:40.508 "uuid": "f4d9fec0-ee1d-49bd-a4f1-648eb238078f", 00:16:40.508 "is_configured": true, 00:16:40.508 "data_offset": 2048, 00:16:40.508 "data_size": 63488 00:16:40.508 }, 00:16:40.508 { 00:16:40.508 "name": "BaseBdev2", 00:16:40.508 "uuid": "f118930c-094f-479c-a29e-bd2809e3abc5", 00:16:40.508 "is_configured": true, 00:16:40.508 "data_offset": 2048, 00:16:40.508 "data_size": 63488 00:16:40.508 }, 00:16:40.508 { 00:16:40.508 "name": "BaseBdev3", 00:16:40.508 "uuid": "e79895a0-5388-4048-a5c6-07c455616cc9", 00:16:40.508 "is_configured": true, 00:16:40.508 "data_offset": 2048, 00:16:40.508 "data_size": 63488 00:16:40.508 } 00:16:40.508 ] 00:16:40.508 } 00:16:40.508 } 00:16:40.508 }' 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:40.508 BaseBdev2 00:16:40.508 BaseBdev3' 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.508 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.768 [2024-11-15 11:28:23.509662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:40.768 [2024-11-15 11:28:23.509698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:16:40.768 [2024-11-15 11:28:23.509825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.768 [2024-11-15 11:28:23.510282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.768 [2024-11-15 11:28:23.510309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80733 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80733 ']' 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 80733 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80733 00:16:40.768 killing process with pid 80733 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80733' 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80733 00:16:40.768 [2024-11-15 11:28:23.551131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:40.768 11:28:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@976 -- # wait 80733 00:16:41.040 [2024-11-15 11:28:23.833779] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:42.419 11:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:42.419 00:16:42.419 real 0m12.166s 00:16:42.419 user 0m20.052s 00:16:42.419 sys 0m1.847s 00:16:42.419 11:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:42.419 ************************************ 00:16:42.419 END TEST raid5f_state_function_test_sb 00:16:42.419 ************************************ 00:16:42.419 11:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.419 11:28:25 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:16:42.419 11:28:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:42.419 11:28:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:42.419 11:28:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:42.419 ************************************ 00:16:42.419 START TEST raid5f_superblock_test 00:16:42.419 ************************************ 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81365 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81365 00:16:42.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81365 ']' 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:42.419 11:28:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.419 [2024-11-15 11:28:25.137116] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:16:42.419 [2024-11-15 11:28:25.137375] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81365 ] 00:16:42.419 [2024-11-15 11:28:25.328810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.678 [2024-11-15 11:28:25.478771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.938 [2024-11-15 11:28:25.693362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.938 [2024-11-15 11:28:25.693403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.505 malloc1 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.505 [2024-11-15 11:28:26.199851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:43.505 [2024-11-15 11:28:26.200092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.505 [2024-11-15 11:28:26.200269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:43.505 [2024-11-15 
11:28:26.200422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.505 [2024-11-15 11:28:26.203512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.505 [2024-11-15 11:28:26.203680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:43.505 pt1 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.505 malloc2 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.505 [2024-11-15 11:28:26.259716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:43.505 [2024-11-15 11:28:26.259807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.505 [2024-11-15 11:28:26.259846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:43.505 [2024-11-15 11:28:26.259861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.505 [2024-11-15 11:28:26.262838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.505 [2024-11-15 11:28:26.263035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:43.505 pt2 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:43.505 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:43.506 11:28:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.506 malloc3 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.506 [2024-11-15 11:28:26.331380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:43.506 [2024-11-15 11:28:26.331482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.506 [2024-11-15 11:28:26.331519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:43.506 [2024-11-15 11:28:26.331536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.506 [2024-11-15 11:28:26.334598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.506 [2024-11-15 11:28:26.334643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:43.506 pt3 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f 
-b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.506 [2024-11-15 11:28:26.343512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:43.506 [2024-11-15 11:28:26.346059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:43.506 [2024-11-15 11:28:26.346205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:43.506 [2024-11-15 11:28:26.346446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:43.506 [2024-11-15 11:28:26.346486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:43.506 [2024-11-15 11:28:26.346801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:43.506 [2024-11-15 11:28:26.351978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:43.506 [2024-11-15 11:28:26.352009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:43.506 [2024-11-15 11:28:26.352299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.506 11:28:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.506 "name": "raid_bdev1", 00:16:43.506 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:43.506 "strip_size_kb": 64, 00:16:43.506 "state": "online", 00:16:43.506 "raid_level": "raid5f", 00:16:43.506 "superblock": true, 00:16:43.506 "num_base_bdevs": 3, 00:16:43.506 "num_base_bdevs_discovered": 3, 00:16:43.506 "num_base_bdevs_operational": 3, 00:16:43.506 "base_bdevs_list": [ 00:16:43.506 { 00:16:43.506 "name": "pt1", 00:16:43.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.506 "is_configured": true, 00:16:43.506 "data_offset": 2048, 00:16:43.506 "data_size": 63488 00:16:43.506 }, 00:16:43.506 { 00:16:43.506 "name": "pt2", 00:16:43.506 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:43.506 "is_configured": true, 00:16:43.506 "data_offset": 2048, 00:16:43.506 "data_size": 63488 00:16:43.506 }, 00:16:43.506 { 00:16:43.506 "name": "pt3", 00:16:43.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.506 "is_configured": true, 00:16:43.506 "data_offset": 2048, 00:16:43.506 "data_size": 63488 00:16:43.506 } 00:16:43.506 ] 00:16:43.506 }' 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.506 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.073 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:44.073 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:44.073 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:44.073 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:44.073 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:44.073 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:44.073 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:44.073 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.073 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:44.073 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.073 [2024-11-15 11:28:26.894879] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.073 11:28:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.073 11:28:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:44.073 "name": "raid_bdev1", 00:16:44.073 "aliases": [ 00:16:44.073 "b902a7f6-3049-48ba-aa02-1432ed0db4fd" 00:16:44.073 ], 00:16:44.073 "product_name": "Raid Volume", 00:16:44.073 "block_size": 512, 00:16:44.073 "num_blocks": 126976, 00:16:44.073 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:44.073 "assigned_rate_limits": { 00:16:44.073 "rw_ios_per_sec": 0, 00:16:44.073 "rw_mbytes_per_sec": 0, 00:16:44.073 "r_mbytes_per_sec": 0, 00:16:44.073 "w_mbytes_per_sec": 0 00:16:44.073 }, 00:16:44.073 "claimed": false, 00:16:44.073 "zoned": false, 00:16:44.073 "supported_io_types": { 00:16:44.073 "read": true, 00:16:44.073 "write": true, 00:16:44.073 "unmap": false, 00:16:44.073 "flush": false, 00:16:44.073 "reset": true, 00:16:44.073 "nvme_admin": false, 00:16:44.073 "nvme_io": false, 00:16:44.073 "nvme_io_md": false, 00:16:44.073 "write_zeroes": true, 00:16:44.073 "zcopy": false, 00:16:44.073 "get_zone_info": false, 00:16:44.073 "zone_management": false, 00:16:44.073 "zone_append": false, 00:16:44.073 "compare": false, 00:16:44.073 "compare_and_write": false, 00:16:44.073 "abort": false, 00:16:44.073 "seek_hole": false, 00:16:44.073 "seek_data": false, 00:16:44.073 "copy": false, 00:16:44.073 "nvme_iov_md": false 00:16:44.073 }, 00:16:44.073 "driver_specific": { 00:16:44.073 "raid": { 00:16:44.073 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:44.073 "strip_size_kb": 64, 00:16:44.073 "state": "online", 00:16:44.073 "raid_level": "raid5f", 00:16:44.073 "superblock": true, 00:16:44.073 "num_base_bdevs": 3, 00:16:44.073 "num_base_bdevs_discovered": 3, 00:16:44.073 "num_base_bdevs_operational": 3, 00:16:44.073 "base_bdevs_list": [ 00:16:44.073 { 00:16:44.073 "name": "pt1", 00:16:44.073 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.073 "is_configured": true, 00:16:44.073 "data_offset": 2048, 00:16:44.074 "data_size": 63488 00:16:44.074 }, 00:16:44.074 { 00:16:44.074 "name": "pt2", 00:16:44.074 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:44.074 "is_configured": true, 00:16:44.074 "data_offset": 2048, 00:16:44.074 "data_size": 63488 00:16:44.074 }, 00:16:44.074 { 00:16:44.074 "name": "pt3", 00:16:44.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.074 "is_configured": true, 00:16:44.074 "data_offset": 2048, 00:16:44.074 "data_size": 63488 00:16:44.074 } 00:16:44.074 ] 00:16:44.074 } 00:16:44.074 } 00:16:44.074 }' 00:16:44.074 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.074 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:44.074 pt2 00:16:44.074 pt3' 00:16:44.074 11:28:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.333 [2024-11-15 11:28:27.214881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b902a7f6-3049-48ba-aa02-1432ed0db4fd 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b902a7f6-3049-48ba-aa02-1432ed0db4fd ']' 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.333 [2024-11-15 11:28:27.266658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.333 [2024-11-15 11:28:27.266727] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.333 [2024-11-15 11:28:27.266823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.333 [2024-11-15 11:28:27.266944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.333 [2024-11-15 11:28:27.266961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.333 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 
11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 
00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 [2024-11-15 11:28:27.414818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:44.593 [2024-11-15 11:28:27.417547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:44.593 [2024-11-15 11:28:27.417628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:44.593 [2024-11-15 11:28:27.417710] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:44.593 [2024-11-15 11:28:27.417789] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:44.593 [2024-11-15 11:28:27.417825] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:44.593 [2024-11-15 11:28:27.417855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.593 [2024-11-15 11:28:27.417870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:44.593 request: 00:16:44.593 { 00:16:44.593 "name": "raid_bdev1", 00:16:44.593 "raid_level": "raid5f", 00:16:44.593 "base_bdevs": [ 00:16:44.593 "malloc1", 00:16:44.593 "malloc2", 00:16:44.593 "malloc3" 00:16:44.593 ], 00:16:44.593 "strip_size_kb": 64, 00:16:44.593 "superblock": false, 00:16:44.593 "method": "bdev_raid_create", 00:16:44.593 "req_id": 1 00:16:44.593 } 00:16:44.593 Got JSON-RPC error response 00:16:44.593 response: 00:16:44.593 { 00:16:44.593 "code": -17, 00:16:44.593 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:44.593 } 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 [2024-11-15 11:28:27.478764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:44.593 [2024-11-15 11:28:27.478862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.593 [2024-11-15 11:28:27.478896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:44.593 [2024-11-15 11:28:27.478912] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.593 [2024-11-15 11:28:27.482174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.593 [2024-11-15 11:28:27.482233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:44.593 [2024-11-15 11:28:27.482377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:44.593 [2024-11-15 11:28:27.482453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:44.593 pt1 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.593 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.852 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.852 "name": "raid_bdev1", 00:16:44.852 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:44.852 "strip_size_kb": 64, 00:16:44.852 "state": "configuring", 00:16:44.852 "raid_level": "raid5f", 00:16:44.852 "superblock": true, 00:16:44.852 "num_base_bdevs": 3, 00:16:44.852 "num_base_bdevs_discovered": 1, 00:16:44.852 "num_base_bdevs_operational": 3, 00:16:44.852 "base_bdevs_list": [ 00:16:44.852 { 00:16:44.852 "name": "pt1", 00:16:44.852 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.852 "is_configured": true, 00:16:44.852 "data_offset": 2048, 00:16:44.852 "data_size": 63488 00:16:44.852 }, 00:16:44.852 { 00:16:44.852 "name": null, 00:16:44.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.852 "is_configured": false, 00:16:44.852 "data_offset": 2048, 00:16:44.852 "data_size": 63488 00:16:44.852 }, 00:16:44.852 { 00:16:44.852 "name": null, 00:16:44.852 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.852 "is_configured": false, 00:16:44.852 "data_offset": 2048, 00:16:44.852 "data_size": 63488 00:16:44.852 } 00:16:44.852 ] 00:16:44.852 }' 00:16:44.852 11:28:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.852 11:28:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b 
malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.111 [2024-11-15 11:28:28.006958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:45.111 [2024-11-15 11:28:28.007066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.111 [2024-11-15 11:28:28.007105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:45.111 [2024-11-15 11:28:28.007121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.111 [2024-11-15 11:28:28.007782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.111 [2024-11-15 11:28:28.007831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:45.111 [2024-11-15 11:28:28.007962] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:45.111 [2024-11-15 11:28:28.008005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:45.111 pt2 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.111 [2024-11-15 11:28:28.014917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state 
raid_bdev1 configuring raid5f 64 3 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.111 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.390 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.390 "name": "raid_bdev1", 00:16:45.390 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:45.390 "strip_size_kb": 64, 00:16:45.390 "state": "configuring", 00:16:45.390 "raid_level": "raid5f", 00:16:45.390 "superblock": true, 00:16:45.390 "num_base_bdevs": 3, 00:16:45.390 
"num_base_bdevs_discovered": 1, 00:16:45.390 "num_base_bdevs_operational": 3, 00:16:45.390 "base_bdevs_list": [ 00:16:45.390 { 00:16:45.390 "name": "pt1", 00:16:45.390 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:45.390 "is_configured": true, 00:16:45.390 "data_offset": 2048, 00:16:45.390 "data_size": 63488 00:16:45.390 }, 00:16:45.390 { 00:16:45.390 "name": null, 00:16:45.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.390 "is_configured": false, 00:16:45.390 "data_offset": 0, 00:16:45.390 "data_size": 63488 00:16:45.390 }, 00:16:45.390 { 00:16:45.390 "name": null, 00:16:45.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.390 "is_configured": false, 00:16:45.390 "data_offset": 2048, 00:16:45.390 "data_size": 63488 00:16:45.390 } 00:16:45.390 ] 00:16:45.390 }' 00:16:45.390 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.390 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.648 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:45.648 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:45.648 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:45.648 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.648 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.648 [2024-11-15 11:28:28.567137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:45.648 [2024-11-15 11:28:28.567286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.648 [2024-11-15 11:28:28.567318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:45.648 [2024-11-15 
11:28:28.567337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.648 [2024-11-15 11:28:28.568000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.648 [2024-11-15 11:28:28.568067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:45.648 [2024-11-15 11:28:28.568202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:45.648 [2024-11-15 11:28:28.568245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:45.648 pt2 00:16:45.648 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.648 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:45.648 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:45.648 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:45.648 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.648 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.648 [2024-11-15 11:28:28.579108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:45.648 [2024-11-15 11:28:28.579233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.648 [2024-11-15 11:28:28.579260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:45.648 [2024-11-15 11:28:28.579278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.649 [2024-11-15 11:28:28.579822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.649 [2024-11-15 11:28:28.579873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: pt3 00:16:45.649 [2024-11-15 11:28:28.579966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:45.649 [2024-11-15 11:28:28.580003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:45.649 [2024-11-15 11:28:28.580202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:45.649 [2024-11-15 11:28:28.580235] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:45.649 [2024-11-15 11:28:28.580577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:45.649 [2024-11-15 11:28:28.585755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:45.649 [2024-11-15 11:28:28.585784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:45.649 [2024-11-15 11:28:28.586067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.649 pt3 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.649 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.908 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.908 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.908 "name": "raid_bdev1", 00:16:45.908 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:45.908 "strip_size_kb": 64, 00:16:45.908 "state": "online", 00:16:45.908 "raid_level": "raid5f", 00:16:45.908 "superblock": true, 00:16:45.908 "num_base_bdevs": 3, 00:16:45.908 "num_base_bdevs_discovered": 3, 00:16:45.908 "num_base_bdevs_operational": 3, 00:16:45.908 "base_bdevs_list": [ 00:16:45.908 { 00:16:45.908 "name": "pt1", 00:16:45.908 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:45.908 "is_configured": true, 00:16:45.908 "data_offset": 2048, 00:16:45.908 "data_size": 63488 00:16:45.908 }, 00:16:45.908 { 00:16:45.908 "name": "pt2", 00:16:45.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.908 "is_configured": true, 00:16:45.908 "data_offset": 2048, 00:16:45.908 "data_size": 63488 00:16:45.908 }, 00:16:45.908 { 
00:16:45.908 "name": "pt3", 00:16:45.908 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.908 "is_configured": true, 00:16:45.908 "data_offset": 2048, 00:16:45.908 "data_size": 63488 00:16:45.908 } 00:16:45.908 ] 00:16:45.908 }' 00:16:45.908 11:28:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.908 11:28:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.166 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:46.166 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:46.166 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:46.166 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:46.166 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:46.166 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:46.166 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:46.166 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:46.166 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.166 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.166 [2024-11-15 11:28:29.100898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:46.425 "name": "raid_bdev1", 00:16:46.425 "aliases": [ 00:16:46.425 "b902a7f6-3049-48ba-aa02-1432ed0db4fd" 00:16:46.425 ], 00:16:46.425 
"product_name": "Raid Volume", 00:16:46.425 "block_size": 512, 00:16:46.425 "num_blocks": 126976, 00:16:46.425 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:46.425 "assigned_rate_limits": { 00:16:46.425 "rw_ios_per_sec": 0, 00:16:46.425 "rw_mbytes_per_sec": 0, 00:16:46.425 "r_mbytes_per_sec": 0, 00:16:46.425 "w_mbytes_per_sec": 0 00:16:46.425 }, 00:16:46.425 "claimed": false, 00:16:46.425 "zoned": false, 00:16:46.425 "supported_io_types": { 00:16:46.425 "read": true, 00:16:46.425 "write": true, 00:16:46.425 "unmap": false, 00:16:46.425 "flush": false, 00:16:46.425 "reset": true, 00:16:46.425 "nvme_admin": false, 00:16:46.425 "nvme_io": false, 00:16:46.425 "nvme_io_md": false, 00:16:46.425 "write_zeroes": true, 00:16:46.425 "zcopy": false, 00:16:46.425 "get_zone_info": false, 00:16:46.425 "zone_management": false, 00:16:46.425 "zone_append": false, 00:16:46.425 "compare": false, 00:16:46.425 "compare_and_write": false, 00:16:46.425 "abort": false, 00:16:46.425 "seek_hole": false, 00:16:46.425 "seek_data": false, 00:16:46.425 "copy": false, 00:16:46.425 "nvme_iov_md": false 00:16:46.425 }, 00:16:46.425 "driver_specific": { 00:16:46.425 "raid": { 00:16:46.425 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:46.425 "strip_size_kb": 64, 00:16:46.425 "state": "online", 00:16:46.425 "raid_level": "raid5f", 00:16:46.425 "superblock": true, 00:16:46.425 "num_base_bdevs": 3, 00:16:46.425 "num_base_bdevs_discovered": 3, 00:16:46.425 "num_base_bdevs_operational": 3, 00:16:46.425 "base_bdevs_list": [ 00:16:46.425 { 00:16:46.425 "name": "pt1", 00:16:46.425 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:46.425 "is_configured": true, 00:16:46.425 "data_offset": 2048, 00:16:46.425 "data_size": 63488 00:16:46.425 }, 00:16:46.425 { 00:16:46.425 "name": "pt2", 00:16:46.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.425 "is_configured": true, 00:16:46.425 "data_offset": 2048, 00:16:46.425 "data_size": 63488 00:16:46.425 }, 00:16:46.425 { 00:16:46.425 
"name": "pt3", 00:16:46.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:46.425 "is_configured": true, 00:16:46.425 "data_offset": 2048, 00:16:46.425 "data_size": 63488 00:16:46.425 } 00:16:46.425 ] 00:16:46.425 } 00:16:46.425 } 00:16:46.425 }' 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:46.425 pt2 00:16:46.425 pt3' 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:46.425 11:28:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.425 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.426 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.426 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.426 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.426 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.426 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.426 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:46.426 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.426 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.426 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 
-- # jq -r '.[] | .uuid' 00:16:46.683 [2024-11-15 11:28:29.428935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b902a7f6-3049-48ba-aa02-1432ed0db4fd '!=' b902a7f6-3049-48ba-aa02-1432ed0db4fd ']' 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.683 [2024-11-15 11:28:29.480793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.683 11:28:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.683 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.683 "name": "raid_bdev1", 00:16:46.683 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:46.683 "strip_size_kb": 64, 00:16:46.683 "state": "online", 00:16:46.683 "raid_level": "raid5f", 00:16:46.683 "superblock": true, 00:16:46.683 "num_base_bdevs": 3, 00:16:46.683 "num_base_bdevs_discovered": 2, 00:16:46.683 "num_base_bdevs_operational": 2, 00:16:46.683 "base_bdevs_list": [ 00:16:46.683 { 00:16:46.683 "name": null, 00:16:46.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.684 "is_configured": false, 00:16:46.684 "data_offset": 0, 00:16:46.684 "data_size": 63488 00:16:46.684 }, 00:16:46.684 { 00:16:46.684 "name": "pt2", 00:16:46.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.684 "is_configured": true, 00:16:46.684 "data_offset": 2048, 00:16:46.684 "data_size": 63488 00:16:46.684 }, 00:16:46.684 { 00:16:46.684 "name": "pt3", 00:16:46.684 "uuid": "00000000-0000-0000-0000-000000000003", 
00:16:46.684 "is_configured": true, 00:16:46.684 "data_offset": 2048, 00:16:46.684 "data_size": 63488 00:16:46.684 } 00:16:46.684 ] 00:16:46.684 }' 00:16:46.684 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.684 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.249 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:47.249 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.249 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.249 [2024-11-15 11:28:29.992834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.249 [2024-11-15 11:28:29.992887] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.249 [2024-11-15 11:28:29.992991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.249 [2024-11-15 11:28:29.993069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.249 [2024-11-15 11:28:29.993106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:47.249 11:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.249 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.249 11:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.249 
11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p 
pt2 -u 00000000-0000-0000-0000-000000000002 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.249 [2024-11-15 11:28:30.072783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:47.249 [2024-11-15 11:28:30.072876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.249 [2024-11-15 11:28:30.072899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:47.249 [2024-11-15 11:28:30.072915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.249 [2024-11-15 11:28:30.075956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.249 [2024-11-15 11:28:30.076031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:47.249 [2024-11-15 11:28:30.076121] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:47.249 [2024-11-15 11:28:30.076214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:47.249 pt2 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.249 "name": "raid_bdev1", 00:16:47.249 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:47.249 "strip_size_kb": 64, 00:16:47.249 "state": "configuring", 00:16:47.249 "raid_level": "raid5f", 00:16:47.249 "superblock": true, 00:16:47.249 "num_base_bdevs": 3, 00:16:47.249 "num_base_bdevs_discovered": 1, 00:16:47.249 "num_base_bdevs_operational": 2, 00:16:47.249 "base_bdevs_list": [ 00:16:47.249 { 00:16:47.249 "name": null, 00:16:47.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.249 "is_configured": false, 00:16:47.249 "data_offset": 2048, 00:16:47.249 "data_size": 63488 00:16:47.249 }, 00:16:47.249 { 00:16:47.249 "name": "pt2", 00:16:47.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.249 "is_configured": true, 00:16:47.249 "data_offset": 2048, 00:16:47.249 "data_size": 63488 00:16:47.249 }, 
00:16:47.249 { 00:16:47.249 "name": null, 00:16:47.249 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:47.249 "is_configured": false, 00:16:47.249 "data_offset": 2048, 00:16:47.249 "data_size": 63488 00:16:47.249 } 00:16:47.249 ] 00:16:47.249 }' 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.249 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.816 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.817 [2024-11-15 11:28:30.597034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:47.817 [2024-11-15 11:28:30.597170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.817 [2024-11-15 11:28:30.597264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:47.817 [2024-11-15 11:28:30.597301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.817 [2024-11-15 11:28:30.597973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.817 [2024-11-15 11:28:30.598033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:47.817 [2024-11-15 11:28:30.598172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:47.817 
[2024-11-15 11:28:30.598233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:47.817 [2024-11-15 11:28:30.598412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:47.817 [2024-11-15 11:28:30.598445] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:47.817 [2024-11-15 11:28:30.598817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:47.817 [2024-11-15 11:28:30.603811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:47.817 [2024-11-15 11:28:30.603856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:47.817 [2024-11-15 11:28:30.604280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.817 pt3 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 
-- # local num_base_bdevs_discovered 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.817 "name": "raid_bdev1", 00:16:47.817 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:47.817 "strip_size_kb": 64, 00:16:47.817 "state": "online", 00:16:47.817 "raid_level": "raid5f", 00:16:47.817 "superblock": true, 00:16:47.817 "num_base_bdevs": 3, 00:16:47.817 "num_base_bdevs_discovered": 2, 00:16:47.817 "num_base_bdevs_operational": 2, 00:16:47.817 "base_bdevs_list": [ 00:16:47.817 { 00:16:47.817 "name": null, 00:16:47.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.817 "is_configured": false, 00:16:47.817 "data_offset": 2048, 00:16:47.817 "data_size": 63488 00:16:47.817 }, 00:16:47.817 { 00:16:47.817 "name": "pt2", 00:16:47.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.817 "is_configured": true, 00:16:47.817 "data_offset": 2048, 00:16:47.817 "data_size": 63488 00:16:47.817 }, 00:16:47.817 { 00:16:47.817 "name": "pt3", 00:16:47.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:47.817 "is_configured": true, 00:16:47.817 "data_offset": 2048, 00:16:47.817 "data_size": 63488 00:16:47.817 } 00:16:47.817 ] 00:16:47.817 }' 00:16:47.817 11:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.817 
11:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.384 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.384 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.384 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.384 [2024-11-15 11:28:31.114840] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.384 [2024-11-15 11:28:31.114898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.384 [2024-11-15 11:28:31.115037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.384 [2024-11-15 11:28:31.115169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.384 [2024-11-15 11:28:31.115210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- 
# '[' 3 -gt 2 ']' 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.385 [2024-11-15 11:28:31.186875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.385 [2024-11-15 11:28:31.186967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.385 [2024-11-15 11:28:31.187000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:48.385 [2024-11-15 11:28:31.187014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.385 [2024-11-15 11:28:31.190334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.385 [2024-11-15 11:28:31.190394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.385 [2024-11-15 11:28:31.190535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:48.385 [2024-11-15 11:28:31.190643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.385 [2024-11-15 11:28:31.190833] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:48.385 [2024-11-15 11:28:31.190862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.385 [2024-11-15 11:28:31.190886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:48.385 [2024-11-15 11:28:31.190949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.385 pt1 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.385 "name": "raid_bdev1", 00:16:48.385 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:48.385 "strip_size_kb": 64, 00:16:48.385 "state": "configuring", 00:16:48.385 "raid_level": "raid5f", 00:16:48.385 "superblock": true, 00:16:48.385 "num_base_bdevs": 3, 00:16:48.385 "num_base_bdevs_discovered": 1, 00:16:48.385 "num_base_bdevs_operational": 2, 00:16:48.385 "base_bdevs_list": [ 00:16:48.385 { 00:16:48.385 "name": null, 00:16:48.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.385 "is_configured": false, 00:16:48.385 "data_offset": 2048, 00:16:48.385 "data_size": 63488 00:16:48.385 }, 00:16:48.385 { 00:16:48.385 "name": "pt2", 00:16:48.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.385 "is_configured": true, 00:16:48.385 "data_offset": 2048, 00:16:48.385 "data_size": 63488 00:16:48.385 }, 00:16:48.385 { 00:16:48.385 "name": null, 00:16:48.385 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.385 "is_configured": false, 00:16:48.385 "data_offset": 2048, 00:16:48.385 "data_size": 63488 00:16:48.385 } 00:16:48.385 ] 00:16:48.385 }' 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.385 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.952 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:48.952 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:48.952 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.952 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:48.952 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.952 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.953 [2024-11-15 11:28:31.759030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:48.953 [2024-11-15 11:28:31.759137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.953 [2024-11-15 11:28:31.759168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:48.953 [2024-11-15 11:28:31.759226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.953 [2024-11-15 11:28:31.759948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.953 [2024-11-15 11:28:31.760001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:48.953 [2024-11-15 11:28:31.760135] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:48.953 [2024-11-15 11:28:31.760167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:48.953 [2024-11-15 11:28:31.760384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:48.953 [2024-11-15 11:28:31.760410] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:48.953 [2024-11-15 11:28:31.760767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:48.953 [2024-11-15 11:28:31.765736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:48.953 [2024-11-15 11:28:31.765784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:48.953 [2024-11-15 11:28:31.766144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.953 pt3 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.953 11:28:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.953 "name": "raid_bdev1", 00:16:48.953 "uuid": "b902a7f6-3049-48ba-aa02-1432ed0db4fd", 00:16:48.953 "strip_size_kb": 64, 00:16:48.953 "state": "online", 00:16:48.953 "raid_level": "raid5f", 00:16:48.953 "superblock": true, 00:16:48.953 "num_base_bdevs": 3, 00:16:48.953 "num_base_bdevs_discovered": 2, 00:16:48.953 "num_base_bdevs_operational": 2, 00:16:48.953 "base_bdevs_list": [ 00:16:48.953 { 00:16:48.953 "name": null, 00:16:48.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.953 "is_configured": false, 00:16:48.953 "data_offset": 2048, 00:16:48.953 "data_size": 63488 00:16:48.953 }, 00:16:48.953 { 00:16:48.953 "name": "pt2", 00:16:48.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.953 "is_configured": true, 00:16:48.953 "data_offset": 2048, 00:16:48.953 "data_size": 63488 00:16:48.953 }, 00:16:48.953 { 00:16:48.953 "name": "pt3", 00:16:48.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.953 "is_configured": true, 00:16:48.953 "data_offset": 2048, 00:16:48.953 "data_size": 63488 00:16:48.953 } 00:16:48.953 ] 00:16:48.953 }' 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.953 11:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:49.520 [2024-11-15 11:28:32.364853] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b902a7f6-3049-48ba-aa02-1432ed0db4fd '!=' b902a7f6-3049-48ba-aa02-1432ed0db4fd ']' 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81365 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81365 ']' 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81365 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81365 00:16:49.520 
killing process with pid 81365 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81365' 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81365 00:16:49.520 [2024-11-15 11:28:32.444428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.520 11:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81365 00:16:49.520 [2024-11-15 11:28:32.444629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.520 [2024-11-15 11:28:32.444712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.520 [2024-11-15 11:28:32.444763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:50.087 [2024-11-15 11:28:32.735715] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:51.023 ************************************ 00:16:51.023 END TEST raid5f_superblock_test 00:16:51.023 ************************************ 00:16:51.023 11:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:51.023 00:16:51.023 real 0m8.816s 00:16:51.023 user 0m14.313s 00:16:51.023 sys 0m1.339s 00:16:51.023 11:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:51.023 11:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.023 11:28:33 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:51.023 11:28:33 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 
00:16:51.023 11:28:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:51.023 11:28:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:51.023 11:28:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.023 ************************************ 00:16:51.023 START TEST raid5f_rebuild_test 00:16:51.023 ************************************ 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:51.023 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:51.024 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:51.024 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:51.024 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:51.024 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81814 00:16:51.024 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81814 00:16:51.024 11:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:51.024 11:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 81814 ']' 00:16:51.024 11:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:16:51.024 11:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:51.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.024 11:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.024 11:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:51.024 11:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.282 [2024-11-15 11:28:34.016746] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:16:51.282 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:51.282 Zero copy mechanism will not be used. 00:16:51.282 [2024-11-15 11:28:34.016948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81814 ] 00:16:51.282 [2024-11-15 11:28:34.204670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.541 [2024-11-15 11:28:34.353779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.799 [2024-11-15 11:28:34.582060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.799 [2024-11-15 11:28:34.582162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.058 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:52.058 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:16:52.058 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:52.058 11:28:35 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:52.058 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.058 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.317 BaseBdev1_malloc 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.318 [2024-11-15 11:28:35.056427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:52.318 [2024-11-15 11:28:35.056510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.318 [2024-11-15 11:28:35.056549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:52.318 [2024-11-15 11:28:35.056571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.318 [2024-11-15 11:28:35.059858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.318 [2024-11-15 11:28:35.059923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:52.318 BaseBdev1 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.318 
11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.318 BaseBdev2_malloc 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.318 [2024-11-15 11:28:35.116680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:52.318 [2024-11-15 11:28:35.116770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.318 [2024-11-15 11:28:35.116810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:52.318 [2024-11-15 11:28:35.116829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.318 [2024-11-15 11:28:35.119932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.318 [2024-11-15 11:28:35.119980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:52.318 BaseBdev2 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.318 BaseBdev3_malloc 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.318 [2024-11-15 11:28:35.190003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:52.318 [2024-11-15 11:28:35.190079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.318 [2024-11-15 11:28:35.190144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:52.318 [2024-11-15 11:28:35.190166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.318 [2024-11-15 11:28:35.193171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.318 [2024-11-15 11:28:35.193253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:52.318 BaseBdev3 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.318 spare_malloc 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.318 11:28:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.318 spare_delay 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.318 [2024-11-15 11:28:35.257231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:52.318 [2024-11-15 11:28:35.257312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.318 [2024-11-15 11:28:35.257352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:52.318 [2024-11-15 11:28:35.257373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.318 [2024-11-15 11:28:35.260515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.318 [2024-11-15 11:28:35.260570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:52.318 spare 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.318 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.577 [2024-11-15 11:28:35.269335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.577 [2024-11-15 11:28:35.272175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:16:52.577 [2024-11-15 11:28:35.272436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.577 [2024-11-15 11:28:35.272703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:52.577 [2024-11-15 11:28:35.272822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:52.577 [2024-11-15 11:28:35.273257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:52.577 [2024-11-15 11:28:35.278713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:52.577 [2024-11-15 11:28:35.278881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:52.577 [2024-11-15 11:28:35.279360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.577 "name": "raid_bdev1", 00:16:52.577 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:16:52.577 "strip_size_kb": 64, 00:16:52.577 "state": "online", 00:16:52.577 "raid_level": "raid5f", 00:16:52.577 "superblock": false, 00:16:52.577 "num_base_bdevs": 3, 00:16:52.577 "num_base_bdevs_discovered": 3, 00:16:52.577 "num_base_bdevs_operational": 3, 00:16:52.577 "base_bdevs_list": [ 00:16:52.577 { 00:16:52.577 "name": "BaseBdev1", 00:16:52.577 "uuid": "ca6ae5be-5050-51fa-afe3-852a659b0e99", 00:16:52.577 "is_configured": true, 00:16:52.577 "data_offset": 0, 00:16:52.577 "data_size": 65536 00:16:52.577 }, 00:16:52.577 { 00:16:52.577 "name": "BaseBdev2", 00:16:52.577 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:16:52.577 "is_configured": true, 00:16:52.577 "data_offset": 0, 00:16:52.577 "data_size": 65536 00:16:52.577 }, 00:16:52.577 { 00:16:52.577 "name": "BaseBdev3", 00:16:52.577 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:16:52.577 "is_configured": true, 00:16:52.577 "data_offset": 0, 00:16:52.577 "data_size": 65536 00:16:52.577 } 00:16:52.577 ] 00:16:52.577 }' 00:16:52.577 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.577 11:28:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.146 [2024-11-15 11:28:35.818023] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:53.146 11:28:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:53.405 [2024-11-15 11:28:36.201973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:53.405 /dev/nbd0 00:16:53.405 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:53.405 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:53.405 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:53.405 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:53.405 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:53.405 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:53.405 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:53.405 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:53.405 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:53.405 
11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:53.405 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:53.405 1+0 records in 00:16:53.405 1+0 records out 00:16:53.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336141 s, 12.2 MB/s 00:16:53.406 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.406 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:53.406 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.406 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:53.406 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:53.406 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:53.406 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:53.406 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:53.406 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:53.406 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:53.406 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:53.972 512+0 records in 00:16:53.972 512+0 records out 00:16:53.972 67108864 bytes (67 MB, 64 MiB) copied, 0.451296 s, 149 MB/s 00:16:53.972 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:53.972 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:16:53.972 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:53.972 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:53.972 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:53.972 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:53.972 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:54.231 [2024-11-15 11:28:36.945328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.231 [2024-11-15 11:28:36.980688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.231 11:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.231 11:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.231 11:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.231 "name": "raid_bdev1", 00:16:54.231 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:16:54.231 "strip_size_kb": 64, 00:16:54.231 "state": "online", 00:16:54.231 "raid_level": "raid5f", 00:16:54.231 
"superblock": false, 00:16:54.231 "num_base_bdevs": 3, 00:16:54.231 "num_base_bdevs_discovered": 2, 00:16:54.231 "num_base_bdevs_operational": 2, 00:16:54.231 "base_bdevs_list": [ 00:16:54.231 { 00:16:54.231 "name": null, 00:16:54.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.231 "is_configured": false, 00:16:54.231 "data_offset": 0, 00:16:54.231 "data_size": 65536 00:16:54.231 }, 00:16:54.231 { 00:16:54.231 "name": "BaseBdev2", 00:16:54.231 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:16:54.231 "is_configured": true, 00:16:54.231 "data_offset": 0, 00:16:54.231 "data_size": 65536 00:16:54.231 }, 00:16:54.231 { 00:16:54.231 "name": "BaseBdev3", 00:16:54.231 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:16:54.231 "is_configured": true, 00:16:54.231 "data_offset": 0, 00:16:54.231 "data_size": 65536 00:16:54.231 } 00:16:54.231 ] 00:16:54.231 }' 00:16:54.231 11:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.231 11:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.799 11:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:54.799 11:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.799 11:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.799 [2024-11-15 11:28:37.492924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.799 [2024-11-15 11:28:37.510163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:54.799 11:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.799 11:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:54.799 [2024-11-15 11:28:37.517902] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:55.736 
11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.736 "name": "raid_bdev1", 00:16:55.736 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:16:55.736 "strip_size_kb": 64, 00:16:55.736 "state": "online", 00:16:55.736 "raid_level": "raid5f", 00:16:55.736 "superblock": false, 00:16:55.736 "num_base_bdevs": 3, 00:16:55.736 "num_base_bdevs_discovered": 3, 00:16:55.736 "num_base_bdevs_operational": 3, 00:16:55.736 "process": { 00:16:55.736 "type": "rebuild", 00:16:55.736 "target": "spare", 00:16:55.736 "progress": { 00:16:55.736 "blocks": 18432, 00:16:55.736 "percent": 14 00:16:55.736 } 00:16:55.736 }, 00:16:55.736 "base_bdevs_list": [ 00:16:55.736 { 00:16:55.736 "name": "spare", 00:16:55.736 "uuid": "578c9a8f-b5d8-5471-b66b-8de563eef5cb", 00:16:55.736 "is_configured": true, 00:16:55.736 "data_offset": 0, 00:16:55.736 "data_size": 65536 
00:16:55.736 }, 00:16:55.736 { 00:16:55.736 "name": "BaseBdev2", 00:16:55.736 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:16:55.736 "is_configured": true, 00:16:55.736 "data_offset": 0, 00:16:55.736 "data_size": 65536 00:16:55.736 }, 00:16:55.736 { 00:16:55.736 "name": "BaseBdev3", 00:16:55.736 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:16:55.736 "is_configured": true, 00:16:55.736 "data_offset": 0, 00:16:55.736 "data_size": 65536 00:16:55.736 } 00:16:55.736 ] 00:16:55.736 }' 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.736 11:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.736 [2024-11-15 11:28:38.671781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.996 [2024-11-15 11:28:38.736221] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:55.996 [2024-11-15 11:28:38.736359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.996 [2024-11-15 11:28:38.736405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.996 [2024-11-15 11:28:38.736419] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.996 "name": "raid_bdev1", 00:16:55.996 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:16:55.996 "strip_size_kb": 64, 00:16:55.996 "state": "online", 00:16:55.996 "raid_level": "raid5f", 00:16:55.996 "superblock": false, 00:16:55.996 
"num_base_bdevs": 3, 00:16:55.996 "num_base_bdevs_discovered": 2, 00:16:55.996 "num_base_bdevs_operational": 2, 00:16:55.996 "base_bdevs_list": [ 00:16:55.996 { 00:16:55.996 "name": null, 00:16:55.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.996 "is_configured": false, 00:16:55.996 "data_offset": 0, 00:16:55.996 "data_size": 65536 00:16:55.996 }, 00:16:55.996 { 00:16:55.996 "name": "BaseBdev2", 00:16:55.996 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:16:55.996 "is_configured": true, 00:16:55.996 "data_offset": 0, 00:16:55.996 "data_size": 65536 00:16:55.996 }, 00:16:55.996 { 00:16:55.996 "name": "BaseBdev3", 00:16:55.996 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:16:55.996 "is_configured": true, 00:16:55.996 "data_offset": 0, 00:16:55.996 "data_size": 65536 00:16:55.996 } 00:16:55.996 ] 00:16:55.996 }' 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.996 11:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.564 "name": "raid_bdev1", 00:16:56.564 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:16:56.564 "strip_size_kb": 64, 00:16:56.564 "state": "online", 00:16:56.564 "raid_level": "raid5f", 00:16:56.564 "superblock": false, 00:16:56.564 "num_base_bdevs": 3, 00:16:56.564 "num_base_bdevs_discovered": 2, 00:16:56.564 "num_base_bdevs_operational": 2, 00:16:56.564 "base_bdevs_list": [ 00:16:56.564 { 00:16:56.564 "name": null, 00:16:56.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.564 "is_configured": false, 00:16:56.564 "data_offset": 0, 00:16:56.564 "data_size": 65536 00:16:56.564 }, 00:16:56.564 { 00:16:56.564 "name": "BaseBdev2", 00:16:56.564 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:16:56.564 "is_configured": true, 00:16:56.564 "data_offset": 0, 00:16:56.564 "data_size": 65536 00:16:56.564 }, 00:16:56.564 { 00:16:56.564 "name": "BaseBdev3", 00:16:56.564 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:16:56.564 "is_configured": true, 00:16:56.564 "data_offset": 0, 00:16:56.564 "data_size": 65536 00:16:56.564 } 00:16:56.564 ] 00:16:56.564 }' 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.564 [2024-11-15 11:28:39.453292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.564 [2024-11-15 11:28:39.470052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.564 11:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:56.564 [2024-11-15 11:28:39.478121] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.940 "name": "raid_bdev1", 00:16:57.940 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 
00:16:57.940 "strip_size_kb": 64, 00:16:57.940 "state": "online", 00:16:57.940 "raid_level": "raid5f", 00:16:57.940 "superblock": false, 00:16:57.940 "num_base_bdevs": 3, 00:16:57.940 "num_base_bdevs_discovered": 3, 00:16:57.940 "num_base_bdevs_operational": 3, 00:16:57.940 "process": { 00:16:57.940 "type": "rebuild", 00:16:57.940 "target": "spare", 00:16:57.940 "progress": { 00:16:57.940 "blocks": 18432, 00:16:57.940 "percent": 14 00:16:57.940 } 00:16:57.940 }, 00:16:57.940 "base_bdevs_list": [ 00:16:57.940 { 00:16:57.940 "name": "spare", 00:16:57.940 "uuid": "578c9a8f-b5d8-5471-b66b-8de563eef5cb", 00:16:57.940 "is_configured": true, 00:16:57.940 "data_offset": 0, 00:16:57.940 "data_size": 65536 00:16:57.940 }, 00:16:57.940 { 00:16:57.940 "name": "BaseBdev2", 00:16:57.940 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:16:57.940 "is_configured": true, 00:16:57.940 "data_offset": 0, 00:16:57.940 "data_size": 65536 00:16:57.940 }, 00:16:57.940 { 00:16:57.940 "name": "BaseBdev3", 00:16:57.940 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:16:57.940 "is_configured": true, 00:16:57.940 "data_offset": 0, 00:16:57.940 "data_size": 65536 00:16:57.940 } 00:16:57.940 ] 00:16:57.940 }' 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:57.940 
11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=597 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.940 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.940 "name": "raid_bdev1", 00:16:57.940 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:16:57.940 "strip_size_kb": 64, 00:16:57.940 "state": "online", 00:16:57.940 "raid_level": "raid5f", 00:16:57.940 "superblock": false, 00:16:57.940 "num_base_bdevs": 3, 00:16:57.940 "num_base_bdevs_discovered": 3, 00:16:57.940 "num_base_bdevs_operational": 3, 00:16:57.940 "process": { 00:16:57.940 "type": "rebuild", 00:16:57.940 "target": "spare", 00:16:57.940 "progress": { 00:16:57.940 "blocks": 22528, 00:16:57.940 "percent": 17 00:16:57.940 } 00:16:57.940 }, 00:16:57.940 "base_bdevs_list": [ 
00:16:57.940 { 00:16:57.940 "name": "spare", 00:16:57.940 "uuid": "578c9a8f-b5d8-5471-b66b-8de563eef5cb", 00:16:57.940 "is_configured": true, 00:16:57.940 "data_offset": 0, 00:16:57.940 "data_size": 65536 00:16:57.940 }, 00:16:57.940 { 00:16:57.940 "name": "BaseBdev2", 00:16:57.940 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:16:57.940 "is_configured": true, 00:16:57.940 "data_offset": 0, 00:16:57.940 "data_size": 65536 00:16:57.940 }, 00:16:57.940 { 00:16:57.940 "name": "BaseBdev3", 00:16:57.940 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:16:57.940 "is_configured": true, 00:16:57.940 "data_offset": 0, 00:16:57.940 "data_size": 65536 00:16:57.940 } 00:16:57.940 ] 00:16:57.940 }' 00:16:57.941 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.941 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.941 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.941 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.941 11:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.876 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.876 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.876 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.876 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.876 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.876 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.876 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:58.876 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.876 11:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.876 11:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.135 11:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.135 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.135 "name": "raid_bdev1", 00:16:59.135 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:16:59.135 "strip_size_kb": 64, 00:16:59.135 "state": "online", 00:16:59.135 "raid_level": "raid5f", 00:16:59.135 "superblock": false, 00:16:59.135 "num_base_bdevs": 3, 00:16:59.135 "num_base_bdevs_discovered": 3, 00:16:59.135 "num_base_bdevs_operational": 3, 00:16:59.135 "process": { 00:16:59.135 "type": "rebuild", 00:16:59.135 "target": "spare", 00:16:59.135 "progress": { 00:16:59.135 "blocks": 47104, 00:16:59.135 "percent": 35 00:16:59.135 } 00:16:59.135 }, 00:16:59.135 "base_bdevs_list": [ 00:16:59.135 { 00:16:59.135 "name": "spare", 00:16:59.135 "uuid": "578c9a8f-b5d8-5471-b66b-8de563eef5cb", 00:16:59.135 "is_configured": true, 00:16:59.135 "data_offset": 0, 00:16:59.135 "data_size": 65536 00:16:59.135 }, 00:16:59.135 { 00:16:59.135 "name": "BaseBdev2", 00:16:59.135 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:16:59.135 "is_configured": true, 00:16:59.135 "data_offset": 0, 00:16:59.135 "data_size": 65536 00:16:59.135 }, 00:16:59.135 { 00:16:59.135 "name": "BaseBdev3", 00:16:59.135 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:16:59.135 "is_configured": true, 00:16:59.135 "data_offset": 0, 00:16:59.135 "data_size": 65536 00:16:59.135 } 00:16:59.135 ] 00:16:59.135 }' 00:16:59.135 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.135 11:28:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.135 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.135 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.135 11:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.072 11:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.072 11:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.072 11:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.072 11:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.072 11:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.072 11:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.072 11:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.072 11:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.072 11:28:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.072 11:28:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.072 11:28:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.331 11:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.331 "name": "raid_bdev1", 00:17:00.331 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:17:00.331 "strip_size_kb": 64, 00:17:00.331 "state": "online", 00:17:00.331 "raid_level": "raid5f", 00:17:00.331 "superblock": false, 00:17:00.331 "num_base_bdevs": 3, 00:17:00.331 
"num_base_bdevs_discovered": 3, 00:17:00.331 "num_base_bdevs_operational": 3, 00:17:00.331 "process": { 00:17:00.331 "type": "rebuild", 00:17:00.331 "target": "spare", 00:17:00.331 "progress": { 00:17:00.331 "blocks": 69632, 00:17:00.331 "percent": 53 00:17:00.331 } 00:17:00.331 }, 00:17:00.331 "base_bdevs_list": [ 00:17:00.331 { 00:17:00.331 "name": "spare", 00:17:00.331 "uuid": "578c9a8f-b5d8-5471-b66b-8de563eef5cb", 00:17:00.331 "is_configured": true, 00:17:00.331 "data_offset": 0, 00:17:00.331 "data_size": 65536 00:17:00.331 }, 00:17:00.331 { 00:17:00.331 "name": "BaseBdev2", 00:17:00.331 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:17:00.331 "is_configured": true, 00:17:00.331 "data_offset": 0, 00:17:00.331 "data_size": 65536 00:17:00.331 }, 00:17:00.331 { 00:17:00.331 "name": "BaseBdev3", 00:17:00.331 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:17:00.331 "is_configured": true, 00:17:00.331 "data_offset": 0, 00:17:00.331 "data_size": 65536 00:17:00.331 } 00:17:00.331 ] 00:17:00.331 }' 00:17:00.331 11:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.331 11:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.331 11:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.331 11:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.331 11:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.267 "name": "raid_bdev1", 00:17:01.267 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:17:01.267 "strip_size_kb": 64, 00:17:01.267 "state": "online", 00:17:01.267 "raid_level": "raid5f", 00:17:01.267 "superblock": false, 00:17:01.267 "num_base_bdevs": 3, 00:17:01.267 "num_base_bdevs_discovered": 3, 00:17:01.267 "num_base_bdevs_operational": 3, 00:17:01.267 "process": { 00:17:01.267 "type": "rebuild", 00:17:01.267 "target": "spare", 00:17:01.267 "progress": { 00:17:01.267 "blocks": 94208, 00:17:01.267 "percent": 71 00:17:01.267 } 00:17:01.267 }, 00:17:01.267 "base_bdevs_list": [ 00:17:01.267 { 00:17:01.267 "name": "spare", 00:17:01.267 "uuid": "578c9a8f-b5d8-5471-b66b-8de563eef5cb", 00:17:01.267 "is_configured": true, 00:17:01.267 "data_offset": 0, 00:17:01.267 "data_size": 65536 00:17:01.267 }, 00:17:01.267 { 00:17:01.267 "name": "BaseBdev2", 00:17:01.267 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:17:01.267 "is_configured": true, 00:17:01.267 "data_offset": 0, 00:17:01.267 "data_size": 65536 00:17:01.267 }, 00:17:01.267 { 00:17:01.267 "name": "BaseBdev3", 00:17:01.267 "uuid": 
"c43e0702-eb0a-5c53-b454-5d134768e532", 00:17:01.267 "is_configured": true, 00:17:01.267 "data_offset": 0, 00:17:01.267 "data_size": 65536 00:17:01.267 } 00:17:01.267 ] 00:17:01.267 }' 00:17:01.267 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.526 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.526 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.526 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.526 11:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:02.462 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.462 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.462 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.462 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.462 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.462 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.462 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.462 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.462 11:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.462 11:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.463 11:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.463 11:28:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.463 "name": "raid_bdev1", 00:17:02.463 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:17:02.463 "strip_size_kb": 64, 00:17:02.463 "state": "online", 00:17:02.463 "raid_level": "raid5f", 00:17:02.463 "superblock": false, 00:17:02.463 "num_base_bdevs": 3, 00:17:02.463 "num_base_bdevs_discovered": 3, 00:17:02.463 "num_base_bdevs_operational": 3, 00:17:02.463 "process": { 00:17:02.463 "type": "rebuild", 00:17:02.463 "target": "spare", 00:17:02.463 "progress": { 00:17:02.463 "blocks": 116736, 00:17:02.463 "percent": 89 00:17:02.463 } 00:17:02.463 }, 00:17:02.463 "base_bdevs_list": [ 00:17:02.463 { 00:17:02.463 "name": "spare", 00:17:02.463 "uuid": "578c9a8f-b5d8-5471-b66b-8de563eef5cb", 00:17:02.463 "is_configured": true, 00:17:02.463 "data_offset": 0, 00:17:02.463 "data_size": 65536 00:17:02.463 }, 00:17:02.463 { 00:17:02.463 "name": "BaseBdev2", 00:17:02.463 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:17:02.463 "is_configured": true, 00:17:02.463 "data_offset": 0, 00:17:02.463 "data_size": 65536 00:17:02.463 }, 00:17:02.463 { 00:17:02.463 "name": "BaseBdev3", 00:17:02.463 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:17:02.463 "is_configured": true, 00:17:02.463 "data_offset": 0, 00:17:02.463 "data_size": 65536 00:17:02.463 } 00:17:02.463 ] 00:17:02.463 }' 00:17:02.463 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.463 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.463 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.722 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.722 11:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.289 [2024-11-15 11:28:45.965597] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:17:03.289 [2024-11-15 11:28:45.965769] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:03.289 [2024-11-15 11:28:45.965855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.548 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.548 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.548 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.548 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.548 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.548 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.548 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.548 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.548 11:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.548 11:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.548 11:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.828 "name": "raid_bdev1", 00:17:03.828 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:17:03.828 "strip_size_kb": 64, 00:17:03.828 "state": "online", 00:17:03.828 "raid_level": "raid5f", 00:17:03.828 "superblock": false, 00:17:03.828 "num_base_bdevs": 3, 00:17:03.828 "num_base_bdevs_discovered": 3, 00:17:03.828 "num_base_bdevs_operational": 3, 00:17:03.828 "base_bdevs_list": [ 00:17:03.828 { 
00:17:03.828 "name": "spare", 00:17:03.828 "uuid": "578c9a8f-b5d8-5471-b66b-8de563eef5cb", 00:17:03.828 "is_configured": true, 00:17:03.828 "data_offset": 0, 00:17:03.828 "data_size": 65536 00:17:03.828 }, 00:17:03.828 { 00:17:03.828 "name": "BaseBdev2", 00:17:03.828 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:17:03.828 "is_configured": true, 00:17:03.828 "data_offset": 0, 00:17:03.828 "data_size": 65536 00:17:03.828 }, 00:17:03.828 { 00:17:03.828 "name": "BaseBdev3", 00:17:03.828 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:17:03.828 "is_configured": true, 00:17:03.828 "data_offset": 0, 00:17:03.828 "data_size": 65536 00:17:03.828 } 00:17:03.828 ] 00:17:03.828 }' 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.828 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.828 "name": "raid_bdev1", 00:17:03.828 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:17:03.828 "strip_size_kb": 64, 00:17:03.828 "state": "online", 00:17:03.828 "raid_level": "raid5f", 00:17:03.828 "superblock": false, 00:17:03.828 "num_base_bdevs": 3, 00:17:03.828 "num_base_bdevs_discovered": 3, 00:17:03.828 "num_base_bdevs_operational": 3, 00:17:03.828 "base_bdevs_list": [ 00:17:03.828 { 00:17:03.828 "name": "spare", 00:17:03.828 "uuid": "578c9a8f-b5d8-5471-b66b-8de563eef5cb", 00:17:03.828 "is_configured": true, 00:17:03.828 "data_offset": 0, 00:17:03.828 "data_size": 65536 00:17:03.828 }, 00:17:03.828 { 00:17:03.828 "name": "BaseBdev2", 00:17:03.828 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:17:03.829 "is_configured": true, 00:17:03.829 "data_offset": 0, 00:17:03.829 "data_size": 65536 00:17:03.829 }, 00:17:03.829 { 00:17:03.829 "name": "BaseBdev3", 00:17:03.829 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:17:03.829 "is_configured": true, 00:17:03.829 "data_offset": 0, 00:17:03.829 "data_size": 65536 00:17:03.829 } 00:17:03.829 ] 00:17:03.829 }' 00:17:03.829 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.829 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:03.829 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.829 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:03.829 11:28:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:03.829 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.829 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.829 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.829 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.829 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.829 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.829 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.088 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.088 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.088 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.088 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.088 11:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.088 11:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.088 11:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.088 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.088 "name": "raid_bdev1", 00:17:04.088 "uuid": "f77edc57-ccf5-43c8-9228-2b515af21775", 00:17:04.088 "strip_size_kb": 64, 00:17:04.088 "state": "online", 00:17:04.088 "raid_level": "raid5f", 00:17:04.088 "superblock": false, 00:17:04.088 "num_base_bdevs": 3, 00:17:04.088 
"num_base_bdevs_discovered": 3, 00:17:04.088 "num_base_bdevs_operational": 3, 00:17:04.088 "base_bdevs_list": [ 00:17:04.088 { 00:17:04.088 "name": "spare", 00:17:04.088 "uuid": "578c9a8f-b5d8-5471-b66b-8de563eef5cb", 00:17:04.088 "is_configured": true, 00:17:04.088 "data_offset": 0, 00:17:04.088 "data_size": 65536 00:17:04.088 }, 00:17:04.088 { 00:17:04.088 "name": "BaseBdev2", 00:17:04.088 "uuid": "d7557856-e0e4-5920-849b-8a7c78358911", 00:17:04.088 "is_configured": true, 00:17:04.088 "data_offset": 0, 00:17:04.088 "data_size": 65536 00:17:04.088 }, 00:17:04.088 { 00:17:04.088 "name": "BaseBdev3", 00:17:04.088 "uuid": "c43e0702-eb0a-5c53-b454-5d134768e532", 00:17:04.088 "is_configured": true, 00:17:04.088 "data_offset": 0, 00:17:04.088 "data_size": 65536 00:17:04.088 } 00:17:04.088 ] 00:17:04.088 }' 00:17:04.088 11:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.088 11:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.657 [2024-11-15 11:28:47.333860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.657 [2024-11-15 11:28:47.333898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.657 [2024-11-15 11:28:47.334013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.657 [2024-11-15 11:28:47.334169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.657 [2024-11-15 11:28:47.334197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 
00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:04.657 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:04.658 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:04.916 /dev/nbd0 00:17:04.916 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:04.916 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:04.916 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:04.916 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:04.916 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.917 1+0 records in 00:17:04.917 1+0 records out 00:17:04.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288757 s, 14.2 MB/s 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:04.917 
11:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:04.917 11:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:05.175 /dev/nbd1 00:17:05.175 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:05.175 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:05.175 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:05.175 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:05.175 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:05.175 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:05.175 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:05.175 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:05.175 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:05.175 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:05.175 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.175 1+0 records in 00:17:05.175 1+0 records out 00:17:05.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033035 s, 12.4 MB/s 00:17:05.176 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.176 11:28:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:05.176 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.176 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:05.176 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:05.176 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.176 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:05.176 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:05.435 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:05.435 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.435 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:05.435 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:05.435 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:05.435 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.435 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:05.694 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:05.694 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:05.694 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:05.694 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.694 11:28:48 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.694 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:05.694 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:05.694 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.694 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.694 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81814 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 81814 ']' 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 81814 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81814 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:05.954 killing process with pid 81814 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81814' 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 81814 00:17:05.954 Received shutdown signal, test time was about 60.000000 seconds 00:17:05.954 00:17:05.954 Latency(us) 00:17:05.954 [2024-11-15T11:28:48.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.954 [2024-11-15T11:28:48.904Z] =================================================================================================================== 00:17:05.954 [2024-11-15T11:28:48.904Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:05.954 [2024-11-15 11:28:48.846591] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.954 11:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 81814 00:17:06.523 [2024-11-15 11:28:49.193354] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:07.461 ************************************ 00:17:07.461 END TEST raid5f_rebuild_test 00:17:07.461 ************************************ 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:07.461 00:17:07.461 real 0m16.330s 00:17:07.461 user 0m20.757s 00:17:07.461 sys 0m2.082s 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:07.461 11:28:50 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:07.461 11:28:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:07.461 11:28:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:07.461 11:28:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:07.461 ************************************ 00:17:07.461 START TEST raid5f_rebuild_test_sb 00:17:07.461 ************************************ 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:07.461 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82267 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82267 00:17:07.462 11:28:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82267 ']' 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:07.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:07.462 11:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.722 [2024-11-15 11:28:50.412231] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:17:07.722 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:07.722 Zero copy mechanism will not be used. 
00:17:07.722 [2024-11-15 11:28:50.412458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82267 ] 00:17:07.722 [2024-11-15 11:28:50.594143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.981 [2024-11-15 11:28:50.732867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.240 [2024-11-15 11:28:50.941646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:08.240 [2024-11-15 11:28:50.941705] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:08.498 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:08.498 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.499 BaseBdev1_malloc 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.499 [2024-11-15 11:28:51.383830] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:08.499 [2024-11-15 11:28:51.383919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.499 [2024-11-15 11:28:51.383955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:08.499 [2024-11-15 11:28:51.383974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.499 [2024-11-15 11:28:51.387075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.499 [2024-11-15 11:28:51.387139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:08.499 BaseBdev1 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.499 BaseBdev2_malloc 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.499 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.499 [2024-11-15 11:28:51.445367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:08.499 [2024-11-15 11:28:51.445457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:08.499 [2024-11-15 11:28:51.445494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:08.499 [2024-11-15 11:28:51.445514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.758 [2024-11-15 11:28:51.448753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.758 [2024-11-15 11:28:51.448815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:08.758 BaseBdev2 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.758 BaseBdev3_malloc 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.758 [2024-11-15 11:28:51.519635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:08.758 [2024-11-15 11:28:51.519727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.758 [2024-11-15 11:28:51.519760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:08.758 [2024-11-15 
11:28:51.519780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.758 [2024-11-15 11:28:51.522852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.758 [2024-11-15 11:28:51.522915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:08.758 BaseBdev3 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.758 spare_malloc 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.758 spare_delay 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.758 [2024-11-15 11:28:51.585253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:08.758 [2024-11-15 11:28:51.585357] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.758 [2024-11-15 11:28:51.585386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:08.758 [2024-11-15 11:28:51.585405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.758 [2024-11-15 11:28:51.588564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.758 [2024-11-15 11:28:51.588631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:08.758 spare 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.758 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.759 [2024-11-15 11:28:51.597515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.759 [2024-11-15 11:28:51.600098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.759 [2024-11-15 11:28:51.600198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:08.759 [2024-11-15 11:28:51.600504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:08.759 [2024-11-15 11:28:51.600529] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:08.759 [2024-11-15 11:28:51.600877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:08.759 [2024-11-15 11:28:51.605667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:08.759 [2024-11-15 11:28:51.605699] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:08.759 [2024-11-15 11:28:51.605908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.759 "name": "raid_bdev1", 00:17:08.759 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:08.759 "strip_size_kb": 64, 00:17:08.759 "state": "online", 00:17:08.759 "raid_level": "raid5f", 00:17:08.759 "superblock": true, 00:17:08.759 "num_base_bdevs": 3, 00:17:08.759 "num_base_bdevs_discovered": 3, 00:17:08.759 "num_base_bdevs_operational": 3, 00:17:08.759 "base_bdevs_list": [ 00:17:08.759 { 00:17:08.759 "name": "BaseBdev1", 00:17:08.759 "uuid": "f2bd1b14-49aa-5f3b-b49c-d5b3d1ca3605", 00:17:08.759 "is_configured": true, 00:17:08.759 "data_offset": 2048, 00:17:08.759 "data_size": 63488 00:17:08.759 }, 00:17:08.759 { 00:17:08.759 "name": "BaseBdev2", 00:17:08.759 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:08.759 "is_configured": true, 00:17:08.759 "data_offset": 2048, 00:17:08.759 "data_size": 63488 00:17:08.759 }, 00:17:08.759 { 00:17:08.759 "name": "BaseBdev3", 00:17:08.759 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:08.759 "is_configured": true, 00:17:08.759 "data_offset": 2048, 00:17:08.759 "data_size": 63488 00:17:08.759 } 00:17:08.759 ] 00:17:08.759 }' 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.759 11:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.327 [2024-11-15 11:28:52.172299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:09.327 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:09.909 [2024-11-15 11:28:52.560207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:09.909 /dev/nbd0 00:17:09.909 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:09.909 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:09.909 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:09.909 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:09.909 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:09.909 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:09.909 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:09.909 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:09.909 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:09.909 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:09.910 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:09.910 1+0 records in 00:17:09.910 1+0 records out 00:17:09.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000783909 s, 5.2 MB/s 00:17:09.910 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.910 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:09.910 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.910 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:09.910 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:09.910 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:09.910 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:09.910 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:09.910 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:09.910 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:09.910 11:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:10.168 496+0 records in 00:17:10.168 496+0 records out 00:17:10.168 65011712 bytes (65 MB, 62 MiB) copied, 0.438224 s, 148 MB/s 00:17:10.168 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:10.168 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.168 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:10.168 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:10.168 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:10.168 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:17:10.168 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:10.427 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:10.427 [2024-11-15 11:28:53.370448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.427 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:10.427 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:10.427 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:10.427 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:10.427 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:10.686 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:10.686 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:10.686 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:10.686 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.686 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.686 [2024-11-15 11:28:53.385438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:10.686 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.687 "name": "raid_bdev1", 00:17:10.687 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:10.687 "strip_size_kb": 64, 00:17:10.687 "state": "online", 00:17:10.687 "raid_level": "raid5f", 00:17:10.687 "superblock": true, 00:17:10.687 "num_base_bdevs": 3, 00:17:10.687 "num_base_bdevs_discovered": 2, 00:17:10.687 "num_base_bdevs_operational": 2, 00:17:10.687 "base_bdevs_list": [ 00:17:10.687 { 00:17:10.687 "name": null, 00:17:10.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.687 "is_configured": 
false, 00:17:10.687 "data_offset": 0, 00:17:10.687 "data_size": 63488 00:17:10.687 }, 00:17:10.687 { 00:17:10.687 "name": "BaseBdev2", 00:17:10.687 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:10.687 "is_configured": true, 00:17:10.687 "data_offset": 2048, 00:17:10.687 "data_size": 63488 00:17:10.687 }, 00:17:10.687 { 00:17:10.687 "name": "BaseBdev3", 00:17:10.687 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:10.687 "is_configured": true, 00:17:10.687 "data_offset": 2048, 00:17:10.687 "data_size": 63488 00:17:10.687 } 00:17:10.687 ] 00:17:10.687 }' 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.687 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.945 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:10.945 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.945 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.945 [2024-11-15 11:28:53.889711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.203 [2024-11-15 11:28:53.906256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:11.203 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.203 11:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:11.203 [2024-11-15 11:28:53.913690] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:12.140 11:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.140 11:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.140 11:28:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.140 11:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.140 11:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.140 11:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.140 11:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.140 11:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.140 11:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.140 11:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.140 11:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.140 "name": "raid_bdev1", 00:17:12.140 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:12.140 "strip_size_kb": 64, 00:17:12.140 "state": "online", 00:17:12.140 "raid_level": "raid5f", 00:17:12.140 "superblock": true, 00:17:12.140 "num_base_bdevs": 3, 00:17:12.140 "num_base_bdevs_discovered": 3, 00:17:12.140 "num_base_bdevs_operational": 3, 00:17:12.140 "process": { 00:17:12.140 "type": "rebuild", 00:17:12.140 "target": "spare", 00:17:12.140 "progress": { 00:17:12.140 "blocks": 18432, 00:17:12.140 "percent": 14 00:17:12.140 } 00:17:12.140 }, 00:17:12.140 "base_bdevs_list": [ 00:17:12.140 { 00:17:12.140 "name": "spare", 00:17:12.140 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468", 00:17:12.140 "is_configured": true, 00:17:12.140 "data_offset": 2048, 00:17:12.140 "data_size": 63488 00:17:12.140 }, 00:17:12.140 { 00:17:12.140 "name": "BaseBdev2", 00:17:12.140 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:12.140 "is_configured": true, 00:17:12.140 "data_offset": 2048, 00:17:12.140 "data_size": 63488 
00:17:12.140 }, 00:17:12.140 { 00:17:12.140 "name": "BaseBdev3", 00:17:12.140 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:12.140 "is_configured": true, 00:17:12.140 "data_offset": 2048, 00:17:12.140 "data_size": 63488 00:17:12.140 } 00:17:12.140 ] 00:17:12.140 }' 00:17:12.140 11:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.140 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.140 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.140 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.140 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:12.140 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.140 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.399 [2024-11-15 11:28:55.092575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.399 [2024-11-15 11:28:55.131761] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:12.399 [2024-11-15 11:28:55.131891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.399 [2024-11-15 11:28:55.131924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.399 [2024-11-15 11:28:55.131937] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.399 "name": "raid_bdev1", 00:17:12.399 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:12.399 "strip_size_kb": 64, 00:17:12.399 "state": "online", 00:17:12.399 "raid_level": "raid5f", 00:17:12.399 "superblock": true, 00:17:12.399 "num_base_bdevs": 3, 00:17:12.399 "num_base_bdevs_discovered": 2, 00:17:12.399 "num_base_bdevs_operational": 2, 00:17:12.399 "base_bdevs_list": [ 00:17:12.399 
{ 00:17:12.399 "name": null, 00:17:12.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.399 "is_configured": false, 00:17:12.399 "data_offset": 0, 00:17:12.399 "data_size": 63488 00:17:12.399 }, 00:17:12.399 { 00:17:12.399 "name": "BaseBdev2", 00:17:12.399 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:12.399 "is_configured": true, 00:17:12.399 "data_offset": 2048, 00:17:12.399 "data_size": 63488 00:17:12.399 }, 00:17:12.399 { 00:17:12.399 "name": "BaseBdev3", 00:17:12.399 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:12.399 "is_configured": true, 00:17:12.399 "data_offset": 2048, 00:17:12.399 "data_size": 63488 00:17:12.399 } 00:17:12.399 ] 00:17:12.399 }' 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.399 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.967 "name": "raid_bdev1", 00:17:12.967 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:12.967 "strip_size_kb": 64, 00:17:12.967 "state": "online", 00:17:12.967 "raid_level": "raid5f", 00:17:12.967 "superblock": true, 00:17:12.967 "num_base_bdevs": 3, 00:17:12.967 "num_base_bdevs_discovered": 2, 00:17:12.967 "num_base_bdevs_operational": 2, 00:17:12.967 "base_bdevs_list": [ 00:17:12.967 { 00:17:12.967 "name": null, 00:17:12.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.967 "is_configured": false, 00:17:12.967 "data_offset": 0, 00:17:12.967 "data_size": 63488 00:17:12.967 }, 00:17:12.967 { 00:17:12.967 "name": "BaseBdev2", 00:17:12.967 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:12.967 "is_configured": true, 00:17:12.967 "data_offset": 2048, 00:17:12.967 "data_size": 63488 00:17:12.967 }, 00:17:12.967 { 00:17:12.967 "name": "BaseBdev3", 00:17:12.967 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:12.967 "is_configured": true, 00:17:12.967 "data_offset": 2048, 00:17:12.967 "data_size": 63488 00:17:12.967 } 00:17:12.967 ] 00:17:12.967 }' 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:12.967 [2024-11-15 11:28:55.839071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.967 [2024-11-15 11:28:55.853669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.967 11:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:12.967 [2024-11-15 11:28:55.860596] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.344 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.344 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.344 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.344 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.344 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.344 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.344 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.344 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.344 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.344 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.344 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.344 "name": "raid_bdev1", 00:17:14.344 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:14.344 "strip_size_kb": 64, 00:17:14.344 "state": "online", 
00:17:14.344 "raid_level": "raid5f", 00:17:14.344 "superblock": true, 00:17:14.344 "num_base_bdevs": 3, 00:17:14.345 "num_base_bdevs_discovered": 3, 00:17:14.345 "num_base_bdevs_operational": 3, 00:17:14.345 "process": { 00:17:14.345 "type": "rebuild", 00:17:14.345 "target": "spare", 00:17:14.345 "progress": { 00:17:14.345 "blocks": 18432, 00:17:14.345 "percent": 14 00:17:14.345 } 00:17:14.345 }, 00:17:14.345 "base_bdevs_list": [ 00:17:14.345 { 00:17:14.345 "name": "spare", 00:17:14.345 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468", 00:17:14.345 "is_configured": true, 00:17:14.345 "data_offset": 2048, 00:17:14.345 "data_size": 63488 00:17:14.345 }, 00:17:14.345 { 00:17:14.345 "name": "BaseBdev2", 00:17:14.345 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:14.345 "is_configured": true, 00:17:14.345 "data_offset": 2048, 00:17:14.345 "data_size": 63488 00:17:14.345 }, 00:17:14.345 { 00:17:14.345 "name": "BaseBdev3", 00:17:14.345 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:14.345 "is_configured": true, 00:17:14.345 "data_offset": 2048, 00:17:14.345 "data_size": 63488 00:17:14.345 } 00:17:14.345 ] 00:17:14.345 }' 00:17:14.345 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.345 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.345 11:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:14.345 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=614 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.345 "name": "raid_bdev1", 00:17:14.345 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:14.345 "strip_size_kb": 64, 00:17:14.345 "state": "online", 00:17:14.345 "raid_level": "raid5f", 00:17:14.345 "superblock": true, 00:17:14.345 "num_base_bdevs": 3, 00:17:14.345 "num_base_bdevs_discovered": 3, 00:17:14.345 "num_base_bdevs_operational": 3, 00:17:14.345 "process": { 00:17:14.345 "type": 
"rebuild", 00:17:14.345 "target": "spare", 00:17:14.345 "progress": { 00:17:14.345 "blocks": 22528, 00:17:14.345 "percent": 17 00:17:14.345 } 00:17:14.345 }, 00:17:14.345 "base_bdevs_list": [ 00:17:14.345 { 00:17:14.345 "name": "spare", 00:17:14.345 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468", 00:17:14.345 "is_configured": true, 00:17:14.345 "data_offset": 2048, 00:17:14.345 "data_size": 63488 00:17:14.345 }, 00:17:14.345 { 00:17:14.345 "name": "BaseBdev2", 00:17:14.345 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:14.345 "is_configured": true, 00:17:14.345 "data_offset": 2048, 00:17:14.345 "data_size": 63488 00:17:14.345 }, 00:17:14.345 { 00:17:14.345 "name": "BaseBdev3", 00:17:14.345 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:14.345 "is_configured": true, 00:17:14.345 "data_offset": 2048, 00:17:14.345 "data_size": 63488 00:17:14.345 } 00:17:14.345 ] 00:17:14.345 }' 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.345 11:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.281 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.282 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.282 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.282 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.282 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.282 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.282 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.282 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.282 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.282 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.282 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.541 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.541 "name": "raid_bdev1", 00:17:15.541 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:15.541 "strip_size_kb": 64, 00:17:15.541 "state": "online", 00:17:15.541 "raid_level": "raid5f", 00:17:15.541 "superblock": true, 00:17:15.541 "num_base_bdevs": 3, 00:17:15.541 "num_base_bdevs_discovered": 3, 00:17:15.541 "num_base_bdevs_operational": 3, 00:17:15.541 "process": { 00:17:15.541 "type": "rebuild", 00:17:15.541 "target": "spare", 00:17:15.541 "progress": { 00:17:15.541 "blocks": 45056, 00:17:15.541 "percent": 35 00:17:15.541 } 00:17:15.541 }, 00:17:15.541 "base_bdevs_list": [ 00:17:15.541 { 00:17:15.541 "name": "spare", 00:17:15.541 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468", 00:17:15.541 "is_configured": true, 00:17:15.541 "data_offset": 2048, 00:17:15.541 "data_size": 63488 00:17:15.541 }, 00:17:15.541 { 00:17:15.541 "name": "BaseBdev2", 00:17:15.541 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:15.541 "is_configured": true, 00:17:15.541 "data_offset": 2048, 00:17:15.541 "data_size": 63488 00:17:15.541 }, 00:17:15.541 { 00:17:15.541 "name": "BaseBdev3", 00:17:15.541 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:15.541 
"is_configured": true, 00:17:15.541 "data_offset": 2048, 00:17:15.541 "data_size": 63488 00:17:15.541 } 00:17:15.541 ] 00:17:15.541 }' 00:17:15.541 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.541 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.541 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.541 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.541 11:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.523 "name": "raid_bdev1", 00:17:16.523 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:16.523 "strip_size_kb": 64, 00:17:16.523 "state": "online", 00:17:16.523 "raid_level": "raid5f", 00:17:16.523 "superblock": true, 00:17:16.523 "num_base_bdevs": 3, 00:17:16.523 "num_base_bdevs_discovered": 3, 00:17:16.523 "num_base_bdevs_operational": 3, 00:17:16.523 "process": { 00:17:16.523 "type": "rebuild", 00:17:16.523 "target": "spare", 00:17:16.523 "progress": { 00:17:16.523 "blocks": 69632, 00:17:16.523 "percent": 54 00:17:16.523 } 00:17:16.523 }, 00:17:16.523 "base_bdevs_list": [ 00:17:16.523 { 00:17:16.523 "name": "spare", 00:17:16.523 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468", 00:17:16.523 "is_configured": true, 00:17:16.523 "data_offset": 2048, 00:17:16.523 "data_size": 63488 00:17:16.523 }, 00:17:16.523 { 00:17:16.523 "name": "BaseBdev2", 00:17:16.523 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:16.523 "is_configured": true, 00:17:16.523 "data_offset": 2048, 00:17:16.523 "data_size": 63488 00:17:16.523 }, 00:17:16.523 { 00:17:16.523 "name": "BaseBdev3", 00:17:16.523 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:16.523 "is_configured": true, 00:17:16.523 "data_offset": 2048, 00:17:16.523 "data_size": 63488 00:17:16.523 } 00:17:16.523 ] 00:17:16.523 }' 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.523 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.781 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.781 11:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.717 "name": "raid_bdev1", 00:17:17.717 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:17.717 "strip_size_kb": 64, 00:17:17.717 "state": "online", 00:17:17.717 "raid_level": "raid5f", 00:17:17.717 "superblock": true, 00:17:17.717 "num_base_bdevs": 3, 00:17:17.717 "num_base_bdevs_discovered": 3, 00:17:17.717 "num_base_bdevs_operational": 3, 00:17:17.717 "process": { 00:17:17.717 "type": "rebuild", 00:17:17.717 "target": "spare", 00:17:17.717 "progress": { 00:17:17.717 "blocks": 94208, 00:17:17.717 "percent": 74 00:17:17.717 } 00:17:17.717 }, 00:17:17.717 "base_bdevs_list": [ 00:17:17.717 { 00:17:17.717 "name": "spare", 00:17:17.717 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468", 00:17:17.717 "is_configured": true, 
00:17:17.717 "data_offset": 2048, 00:17:17.717 "data_size": 63488 00:17:17.717 }, 00:17:17.717 { 00:17:17.717 "name": "BaseBdev2", 00:17:17.717 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:17.717 "is_configured": true, 00:17:17.717 "data_offset": 2048, 00:17:17.717 "data_size": 63488 00:17:17.717 }, 00:17:17.717 { 00:17:17.717 "name": "BaseBdev3", 00:17:17.717 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:17.717 "is_configured": true, 00:17:17.717 "data_offset": 2048, 00:17:17.717 "data_size": 63488 00:17:17.717 } 00:17:17.717 ] 00:17:17.717 }' 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.717 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.977 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.977 11:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.913 "name": "raid_bdev1", 00:17:18.913 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:18.913 "strip_size_kb": 64, 00:17:18.913 "state": "online", 00:17:18.913 "raid_level": "raid5f", 00:17:18.913 "superblock": true, 00:17:18.913 "num_base_bdevs": 3, 00:17:18.913 "num_base_bdevs_discovered": 3, 00:17:18.913 "num_base_bdevs_operational": 3, 00:17:18.913 "process": { 00:17:18.913 "type": "rebuild", 00:17:18.913 "target": "spare", 00:17:18.913 "progress": { 00:17:18.913 "blocks": 116736, 00:17:18.913 "percent": 91 00:17:18.913 } 00:17:18.913 }, 00:17:18.913 "base_bdevs_list": [ 00:17:18.913 { 00:17:18.913 "name": "spare", 00:17:18.913 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468", 00:17:18.913 "is_configured": true, 00:17:18.913 "data_offset": 2048, 00:17:18.913 "data_size": 63488 00:17:18.913 }, 00:17:18.913 { 00:17:18.913 "name": "BaseBdev2", 00:17:18.913 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:18.913 "is_configured": true, 00:17:18.913 "data_offset": 2048, 00:17:18.913 "data_size": 63488 00:17:18.913 }, 00:17:18.913 { 00:17:18.913 "name": "BaseBdev3", 00:17:18.913 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:18.913 "is_configured": true, 00:17:18.913 "data_offset": 2048, 00:17:18.913 "data_size": 63488 00:17:18.913 } 00:17:18.913 ] 00:17:18.913 }' 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.913 11:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.481 [2024-11-15 11:29:02.144411] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:19.481 [2024-11-15 11:29:02.144519] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:19.481 [2024-11-15 11:29:02.144736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.051 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.051 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.051 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.051 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.051 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.051 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.051 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.051 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.051 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.051 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.051 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.051 11:29:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.051 "name": "raid_bdev1", 00:17:20.051 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:20.051 "strip_size_kb": 64, 00:17:20.051 "state": "online", 00:17:20.051 "raid_level": "raid5f", 00:17:20.051 "superblock": true, 00:17:20.051 "num_base_bdevs": 3, 00:17:20.051 "num_base_bdevs_discovered": 3, 00:17:20.052 "num_base_bdevs_operational": 3, 00:17:20.052 "base_bdevs_list": [ 00:17:20.052 { 00:17:20.052 "name": "spare", 00:17:20.052 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468", 00:17:20.052 "is_configured": true, 00:17:20.052 "data_offset": 2048, 00:17:20.052 "data_size": 63488 00:17:20.052 }, 00:17:20.052 { 00:17:20.052 "name": "BaseBdev2", 00:17:20.052 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:20.052 "is_configured": true, 00:17:20.052 "data_offset": 2048, 00:17:20.052 "data_size": 63488 00:17:20.052 }, 00:17:20.052 { 00:17:20.052 "name": "BaseBdev3", 00:17:20.052 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:20.052 "is_configured": true, 00:17:20.052 "data_offset": 2048, 00:17:20.052 "data_size": 63488 00:17:20.052 } 00:17:20.052 ] 00:17:20.052 }' 00:17:20.052 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.052 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:20.052 11:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.310 
11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.310 "name": "raid_bdev1", 00:17:20.310 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:20.310 "strip_size_kb": 64, 00:17:20.310 "state": "online", 00:17:20.310 "raid_level": "raid5f", 00:17:20.310 "superblock": true, 00:17:20.310 "num_base_bdevs": 3, 00:17:20.310 "num_base_bdevs_discovered": 3, 00:17:20.310 "num_base_bdevs_operational": 3, 00:17:20.310 "base_bdevs_list": [ 00:17:20.310 { 00:17:20.310 "name": "spare", 00:17:20.310 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468", 00:17:20.310 "is_configured": true, 00:17:20.310 "data_offset": 2048, 00:17:20.310 "data_size": 63488 00:17:20.310 }, 00:17:20.310 { 00:17:20.310 "name": "BaseBdev2", 00:17:20.310 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:20.310 "is_configured": true, 00:17:20.310 "data_offset": 2048, 00:17:20.310 "data_size": 63488 00:17:20.310 }, 00:17:20.310 { 00:17:20.310 "name": "BaseBdev3", 00:17:20.310 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:20.310 "is_configured": true, 00:17:20.310 "data_offset": 2048, 
00:17:20.310 "data_size": 63488 00:17:20.310 } 00:17:20.310 ] 00:17:20.310 }' 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.310 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.311 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.311 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.311 
11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.311 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.311 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.311 "name": "raid_bdev1", 00:17:20.311 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:20.311 "strip_size_kb": 64, 00:17:20.311 "state": "online", 00:17:20.311 "raid_level": "raid5f", 00:17:20.311 "superblock": true, 00:17:20.311 "num_base_bdevs": 3, 00:17:20.311 "num_base_bdevs_discovered": 3, 00:17:20.311 "num_base_bdevs_operational": 3, 00:17:20.311 "base_bdevs_list": [ 00:17:20.311 { 00:17:20.311 "name": "spare", 00:17:20.311 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468", 00:17:20.311 "is_configured": true, 00:17:20.311 "data_offset": 2048, 00:17:20.311 "data_size": 63488 00:17:20.311 }, 00:17:20.311 { 00:17:20.311 "name": "BaseBdev2", 00:17:20.311 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:20.311 "is_configured": true, 00:17:20.311 "data_offset": 2048, 00:17:20.311 "data_size": 63488 00:17:20.311 }, 00:17:20.311 { 00:17:20.311 "name": "BaseBdev3", 00:17:20.311 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:20.311 "is_configured": true, 00:17:20.311 "data_offset": 2048, 00:17:20.311 "data_size": 63488 00:17:20.311 } 00:17:20.311 ] 00:17:20.311 }' 00:17:20.311 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.311 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.879 [2024-11-15 11:29:03.695611] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.879 [2024-11-15 11:29:03.695650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.879 [2024-11-15 11:29:03.695767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.879 [2024-11-15 11:29:03.695874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.879 [2024-11-15 11:29:03.695898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:20.879 11:29:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:20.879 11:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:21.138 /dev/nbd0 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:21.138 1+0 records in 00:17:21.138 1+0 records out 00:17:21.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466601 s, 8.8 MB/s 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:21.138 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:21.707 /dev/nbd1 00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:21.707 
11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:21.707 1+0 records in
00:17:21.707 1+0 records out
00:17:21.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247206 s, 16.6 MB/s
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:21.707 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:22.273 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:22.273 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:22.273 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:22.273 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:22.273 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:22.273 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:22.273 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:17:22.273 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:17:22.273 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:22.273 11:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-15 11:29:05.289196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
[2024-11-15 11:29:05.289347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-15 11:29:05.289381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
[2024-11-15 11:29:05.289401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-15 11:29:05.292874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-15 11:29:05.292937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
[2024-11-15 11:29:05.293076] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
[2024-11-15 11:29:05.293144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-11-15 11:29:05.293345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
spare
[2024-11-15 11:29:05.293603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-15 11:29:05.393752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-15 11:29:05.393844] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
[2024-11-15 11:29:05.394404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700
[2024-11-15 11:29:05.399027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-15 11:29:05.399051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
[2024-11-15 11:29:05.399369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:22.533 "name": "raid_bdev1",
00:17:22.533 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360",
00:17:22.533 "strip_size_kb": 64,
00:17:22.533 "state": "online",
00:17:22.533 "raid_level": "raid5f",
00:17:22.533 "superblock": true,
00:17:22.533 "num_base_bdevs": 3,
00:17:22.533 "num_base_bdevs_discovered": 3,
00:17:22.533 "num_base_bdevs_operational": 3,
00:17:22.533 "base_bdevs_list": [
00:17:22.533 {
00:17:22.533 "name": "spare",
00:17:22.533 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468",
00:17:22.533 "is_configured": true,
00:17:22.533 "data_offset": 2048,
00:17:22.533 "data_size": 63488
00:17:22.533 },
00:17:22.533 {
00:17:22.533 "name": "BaseBdev2",
00:17:22.533 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706",
00:17:22.533 "is_configured": true,
00:17:22.533 "data_offset": 2048,
00:17:22.533 "data_size": 63488
00:17:22.533 },
00:17:22.533 {
00:17:22.533 "name": "BaseBdev3",
00:17:22.533 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98",
00:17:22.533 "is_configured": true,
00:17:22.533 "data_offset": 2048,
00:17:22.533 "data_size": 63488
00:17:22.533 }
00:17:22.533 ]
00:17:22.533 }'
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:22.533 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:23.101 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:23.101 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:23.101 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:23.101 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:23.101 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:23.101 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.101 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.101 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:23.101 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:23.101 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.101 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:23.101 "name": "raid_bdev1",
00:17:23.101 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360",
00:17:23.101 "strip_size_kb": 64,
00:17:23.101 "state": "online",
00:17:23.101 "raid_level": "raid5f",
00:17:23.101 "superblock": true,
00:17:23.101 "num_base_bdevs": 3,
00:17:23.101 "num_base_bdevs_discovered": 3,
00:17:23.101 "num_base_bdevs_operational": 3,
00:17:23.101 "base_bdevs_list": [
00:17:23.101 {
00:17:23.101 "name": "spare",
00:17:23.101 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468",
00:17:23.101 "is_configured": true,
00:17:23.101 "data_offset": 2048,
00:17:23.101 "data_size": 63488
00:17:23.101 },
00:17:23.101 {
00:17:23.101 "name": "BaseBdev2",
00:17:23.101 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706",
00:17:23.101 "is_configured": true,
00:17:23.101 "data_offset": 2048,
00:17:23.101 "data_size": 63488
00:17:23.101 },
00:17:23.101 {
00:17:23.101 "name": "BaseBdev3",
00:17:23.101 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98",
00:17:23.101 "is_configured": true,
00:17:23.101 "data_offset": 2048,
00:17:23.101 "data_size": 63488
00:17:23.101 }
00:17:23.101 ]
00:17:23.101 }'
00:17:23.101 11:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:23.102 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:23.102 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-15 11:29:06.161154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:23.361 "name": "raid_bdev1",
00:17:23.361 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360",
00:17:23.361 "strip_size_kb": 64,
00:17:23.361 "state": "online",
00:17:23.361 "raid_level": "raid5f",
00:17:23.361 "superblock": true,
00:17:23.361 "num_base_bdevs": 3,
00:17:23.361 "num_base_bdevs_discovered": 2,
00:17:23.361 "num_base_bdevs_operational": 2,
00:17:23.361 "base_bdevs_list": [
00:17:23.361 {
00:17:23.361 "name": null,
00:17:23.361 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:23.361 "is_configured": false,
00:17:23.361 "data_offset": 0,
00:17:23.361 "data_size": 63488
00:17:23.361 },
00:17:23.361 {
00:17:23.361 "name": "BaseBdev2",
00:17:23.361 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706",
00:17:23.361 "is_configured": true,
00:17:23.361 "data_offset": 2048,
00:17:23.361 "data_size": 63488
00:17:23.361 },
00:17:23.361 {
00:17:23.361 "name": "BaseBdev3",
00:17:23.361 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98",
00:17:23.361 "is_configured": true,
00:17:23.361 "data_offset": 2048,
00:17:23.361 "data_size": 63488
00:17:23.361 }
00:17:23.361 ]
00:17:23.361 }'
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:23.361 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:23.929 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:23.929 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.929 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-15 11:29:06.665343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-11-15 11:29:06.665645] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
[2024-11-15 11:29:06.665671] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
[2024-11-15 11:29:06.665738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-11-15 11:29:06.680777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0
00:17:23.929 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.929 11:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
[2024-11-15 11:29:06.687692] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:24.940 "name": "raid_bdev1",
00:17:24.940 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360",
00:17:24.940 "strip_size_kb": 64,
00:17:24.940 "state": "online",
00:17:24.940 "raid_level": "raid5f",
00:17:24.940 "superblock": true,
00:17:24.940 "num_base_bdevs": 3,
00:17:24.940 "num_base_bdevs_discovered": 3,
00:17:24.940 "num_base_bdevs_operational": 3,
00:17:24.940 "process": {
00:17:24.940 "type": "rebuild",
00:17:24.940 "target": "spare",
00:17:24.940 "progress": {
00:17:24.940 "blocks": 18432,
00:17:24.940 "percent": 14
00:17:24.940 }
00:17:24.940 },
00:17:24.940 "base_bdevs_list": [
00:17:24.940 {
00:17:24.940 "name": "spare",
00:17:24.940 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468",
00:17:24.940 "is_configured": true,
00:17:24.940 "data_offset": 2048,
00:17:24.940 "data_size": 63488
00:17:24.940 },
00:17:24.940 {
00:17:24.940 "name": "BaseBdev2",
00:17:24.940 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706",
00:17:24.940 "is_configured": true,
00:17:24.940 "data_offset": 2048,
00:17:24.940 "data_size": 63488
00:17:24.940 },
00:17:24.940 {
00:17:24.940 "name": "BaseBdev3",
00:17:24.940 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98",
00:17:24.940 "is_configured": true,
00:17:24.940 "data_offset": 2048,
00:17:24.940 "data_size": 63488
00:17:24.940 }
00:17:24.940 ]
00:17:24.940 }'
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:24.940 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-15 11:29:07.857770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-11-15 11:29:07.904480] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
[2024-11-15 11:29:07.904620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
[2024-11-15 11:29:07.904647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-11-15 11:29:07.904665] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:25.199 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:25.199 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:17:25.199 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:25.199 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:25.199 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:25.200 "name": "raid_bdev1",
00:17:25.200 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360",
00:17:25.200 "strip_size_kb": 64,
00:17:25.200 "state": "online",
00:17:25.200 "raid_level": "raid5f",
00:17:25.200 "superblock": true,
00:17:25.200 "num_base_bdevs": 3,
00:17:25.200 "num_base_bdevs_discovered": 2,
00:17:25.200 "num_base_bdevs_operational": 2,
00:17:25.200 "base_bdevs_list": [
00:17:25.200 {
00:17:25.200 "name": null,
00:17:25.200 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:25.200 "is_configured": false,
00:17:25.200 "data_offset": 0,
00:17:25.200 "data_size": 63488
00:17:25.200 },
00:17:25.200 {
00:17:25.200 "name": "BaseBdev2",
00:17:25.200 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706",
00:17:25.200 "is_configured": true,
00:17:25.200 "data_offset": 2048,
00:17:25.200 "data_size": 63488
00:17:25.200 },
00:17:25.200 {
00:17:25.200 "name": "BaseBdev3",
00:17:25.200 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98",
00:17:25.200 "is_configured": true,
00:17:25.200 "data_offset": 2048,
00:17:25.200 "data_size": 63488
00:17:25.200 }
00:17:25.200 ]
00:17:25.200 }'
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:25.200 11:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:25.767 11:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:25.767 11:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:25.767 11:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-15 11:29:08.467690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
[2024-11-15 11:29:08.467826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-15 11:29:08.467859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780
[2024-11-15 11:29:08.467880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-15 11:29:08.468657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-15 11:29:08.468846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
[2024-11-15 11:29:08.469024] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
[2024-11-15 11:29:08.469050] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
[2024-11-15 11:29:08.469065] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
[2024-11-15 11:29:08.469116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-11-15 11:29:08.483944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0
spare
00:17:25.767 11:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:25.767 11:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
[2024-11-15 11:29:08.491207] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:26.704 "name": "raid_bdev1",
00:17:26.704 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360",
00:17:26.704 "strip_size_kb": 64,
00:17:26.704 "state": "online",
00:17:26.704 "raid_level": "raid5f",
00:17:26.704 "superblock": true,
00:17:26.704 "num_base_bdevs": 3,
00:17:26.704 "num_base_bdevs_discovered": 3,
00:17:26.704 "num_base_bdevs_operational": 3,
00:17:26.704 "process": {
00:17:26.704 "type": "rebuild",
00:17:26.704 "target": "spare",
00:17:26.704 "progress": {
00:17:26.704 "blocks": 18432,
00:17:26.704 "percent": 14
00:17:26.704 }
00:17:26.704 },
00:17:26.704 "base_bdevs_list": [
00:17:26.704 {
00:17:26.704 "name": "spare",
00:17:26.704 "uuid": "5fbf4cad-5dd5-5c66-bd40-dbdbc9324468",
00:17:26.704 "is_configured": true,
00:17:26.704 "data_offset": 2048,
00:17:26.704 "data_size": 63488
00:17:26.704 },
00:17:26.704 {
00:17:26.704 "name": "BaseBdev2",
00:17:26.704 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706",
00:17:26.704 "is_configured": true,
00:17:26.704 "data_offset": 2048,
00:17:26.704 "data_size": 63488
00:17:26.704 },
00:17:26.704 {
00:17:26.704 "name": "BaseBdev3",
00:17:26.704 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98",
00:17:26.704 "is_configured": true,
00:17:26.704 "data_offset": 2048,
00:17:26.704 "data_size": 63488
00:17:26.704 }
00:17:26.704 ]
00:17:26.704 }'
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:26.704 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-15 11:29:09.653698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-11-15 11:29:09.708847] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
[2024-11-15 11:29:09.708923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
[2024-11-15 11:29:09.708953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-11-15 11:29:09.708963] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:26.963 "name": "raid_bdev1",
00:17:26.963 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360",
00:17:26.963 "strip_size_kb": 64,
00:17:26.963 "state": "online",
00:17:26.963 "raid_level": "raid5f",
00:17:26.963 "superblock": true,
00:17:26.963 "num_base_bdevs": 3,
00:17:26.963 "num_base_bdevs_discovered": 2,
00:17:26.963 "num_base_bdevs_operational": 2,
00:17:26.963 "base_bdevs_list": [
00:17:26.963 {
00:17:26.963 "name": null,
00:17:26.963 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:26.963 "is_configured": false,
00:17:26.963 "data_offset": 0,
00:17:26.963 "data_size": 63488
00:17:26.963 },
00:17:26.963 {
00:17:26.963 "name": "BaseBdev2",
00:17:26.963 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706",
00:17:26.963 "is_configured": true,
00:17:26.963 "data_offset": 2048,
00:17:26.963 "data_size": 63488
00:17:26.963 },
00:17:26.963 {
00:17:26.963 "name": "BaseBdev3",
00:17:26.963 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98",
00:17:26.963 "is_configured": true,
00:17:26.963 "data_offset": 2048,
00:17:26.963 "data_size": 63488
00:17:26.963 }
00:17:26.963 ]
00:17:26.963 }'
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:26.963 11:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:27.530 "name": "raid_bdev1",
00:17:27.530 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360",
00:17:27.530 "strip_size_kb": 64,
00:17:27.530 "state": "online",
00:17:27.530 "raid_level": "raid5f",
00:17:27.530 "superblock": true,
00:17:27.530 "num_base_bdevs": 3,
00:17:27.530 "num_base_bdevs_discovered": 2,
00:17:27.530 "num_base_bdevs_operational": 2,
00:17:27.530 "base_bdevs_list": [
00:17:27.530 {
00:17:27.530 "name": null,
00:17:27.530 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:27.530 "is_configured": false,
00:17:27.530 "data_offset": 0,
00:17:27.530 "data_size": 63488
00:17:27.530 },
00:17:27.530 {
00:17:27.530 "name": "BaseBdev2",
00:17:27.530 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706",
00:17:27.530 "is_configured": true,
00:17:27.530 "data_offset": 2048,
00:17:27.530 "data_size": 63488
00:17:27.530 },
00:17:27.530 {
00:17:27.530 "name": "BaseBdev3",
00:17:27.530 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98",
00:17:27.530 "is_configured": true,
00:17:27.530 "data_offset": 2048,
00:17:27.530 "data_size": 63488
00:17:27.530 }
00:17:27.530 ]
00:17:27.530 }'
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.530 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-15 11:29:10.444363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
[2024-11-15 11:29:10.444606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-15 11:29:10.444686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
[2024-11-15 11:29:10.444702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-15 11:29:10.445398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-15 11:29:10.445425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
[2024-11-15 11:29:10.445545] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
[2024-11-15 11:29:10.445581] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
[2024-11-15 11:29:10.445610] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
[2024-11-15 11:29:10.445624] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
BaseBdev1
00:17:27.531 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.531 11:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb --
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.904 "name": "raid_bdev1", 00:17:28.904 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:28.904 "strip_size_kb": 64, 00:17:28.904 "state": "online", 00:17:28.904 "raid_level": "raid5f", 00:17:28.904 "superblock": true, 00:17:28.904 "num_base_bdevs": 3, 00:17:28.904 "num_base_bdevs_discovered": 2, 00:17:28.904 "num_base_bdevs_operational": 2, 00:17:28.904 "base_bdevs_list": [ 00:17:28.904 { 00:17:28.904 "name": null, 00:17:28.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.904 "is_configured": false, 00:17:28.904 "data_offset": 0, 00:17:28.904 "data_size": 63488 00:17:28.904 }, 00:17:28.904 { 00:17:28.904 "name": "BaseBdev2", 00:17:28.904 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:28.904 "is_configured": true, 00:17:28.904 "data_offset": 2048, 00:17:28.904 "data_size": 63488 00:17:28.904 }, 00:17:28.904 { 00:17:28.904 "name": "BaseBdev3", 00:17:28.904 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:28.904 "is_configured": true, 00:17:28.904 "data_offset": 2048, 00:17:28.904 "data_size": 63488 00:17:28.904 } 00:17:28.904 ] 00:17:28.904 }' 00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:28.904 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.162 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.162 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.162 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.162 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.162 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.162 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.162 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.163 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.163 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.163 11:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.163 "name": "raid_bdev1", 00:17:29.163 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:29.163 "strip_size_kb": 64, 00:17:29.163 "state": "online", 00:17:29.163 "raid_level": "raid5f", 00:17:29.163 "superblock": true, 00:17:29.163 "num_base_bdevs": 3, 00:17:29.163 "num_base_bdevs_discovered": 2, 00:17:29.163 "num_base_bdevs_operational": 2, 00:17:29.163 "base_bdevs_list": [ 00:17:29.163 { 00:17:29.163 "name": null, 00:17:29.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.163 "is_configured": false, 00:17:29.163 "data_offset": 0, 00:17:29.163 "data_size": 63488 00:17:29.163 }, 00:17:29.163 { 00:17:29.163 "name": 
"BaseBdev2", 00:17:29.163 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:29.163 "is_configured": true, 00:17:29.163 "data_offset": 2048, 00:17:29.163 "data_size": 63488 00:17:29.163 }, 00:17:29.163 { 00:17:29.163 "name": "BaseBdev3", 00:17:29.163 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:29.163 "is_configured": true, 00:17:29.163 "data_offset": 2048, 00:17:29.163 "data_size": 63488 00:17:29.163 } 00:17:29.163 ] 00:17:29.163 }' 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.163 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.421 [2024-11-15 11:29:12.112899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.421 [2024-11-15 11:29:12.113327] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:29.421 [2024-11-15 11:29:12.113362] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:29.421 request: 00:17:29.421 { 00:17:29.421 "base_bdev": "BaseBdev1", 00:17:29.421 "raid_bdev": "raid_bdev1", 00:17:29.421 "method": "bdev_raid_add_base_bdev", 00:17:29.421 "req_id": 1 00:17:29.421 } 00:17:29.421 Got JSON-RPC error response 00:17:29.421 response: 00:17:29.421 { 00:17:29.421 "code": -22, 00:17:29.421 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:29.421 } 00:17:29.421 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:29.421 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:29.421 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:29.421 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:29.421 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:29.421 11:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.358 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.358 "name": "raid_bdev1", 00:17:30.358 "uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:30.358 "strip_size_kb": 64, 00:17:30.358 "state": "online", 00:17:30.358 "raid_level": "raid5f", 00:17:30.358 "superblock": true, 00:17:30.358 "num_base_bdevs": 3, 00:17:30.358 "num_base_bdevs_discovered": 2, 00:17:30.358 "num_base_bdevs_operational": 2, 00:17:30.358 "base_bdevs_list": [ 00:17:30.358 { 00:17:30.358 "name": null, 00:17:30.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.358 "is_configured": false, 00:17:30.358 "data_offset": 0, 00:17:30.358 
"data_size": 63488 00:17:30.358 }, 00:17:30.358 { 00:17:30.358 "name": "BaseBdev2", 00:17:30.358 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:30.359 "is_configured": true, 00:17:30.359 "data_offset": 2048, 00:17:30.359 "data_size": 63488 00:17:30.359 }, 00:17:30.359 { 00:17:30.359 "name": "BaseBdev3", 00:17:30.359 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:30.359 "is_configured": true, 00:17:30.359 "data_offset": 2048, 00:17:30.359 "data_size": 63488 00:17:30.359 } 00:17:30.359 ] 00:17:30.359 }' 00:17:30.359 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.359 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.926 "name": "raid_bdev1", 00:17:30.926 
"uuid": "fbd40bf2-17e3-442e-9997-219ba2bcb360", 00:17:30.926 "strip_size_kb": 64, 00:17:30.926 "state": "online", 00:17:30.926 "raid_level": "raid5f", 00:17:30.926 "superblock": true, 00:17:30.926 "num_base_bdevs": 3, 00:17:30.926 "num_base_bdevs_discovered": 2, 00:17:30.926 "num_base_bdevs_operational": 2, 00:17:30.926 "base_bdevs_list": [ 00:17:30.926 { 00:17:30.926 "name": null, 00:17:30.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.926 "is_configured": false, 00:17:30.926 "data_offset": 0, 00:17:30.926 "data_size": 63488 00:17:30.926 }, 00:17:30.926 { 00:17:30.926 "name": "BaseBdev2", 00:17:30.926 "uuid": "1b196cd3-c054-5532-842a-f276a5ed2706", 00:17:30.926 "is_configured": true, 00:17:30.926 "data_offset": 2048, 00:17:30.926 "data_size": 63488 00:17:30.926 }, 00:17:30.926 { 00:17:30.926 "name": "BaseBdev3", 00:17:30.926 "uuid": "5cbec9e4-608d-5959-8500-76fe67723e98", 00:17:30.926 "is_configured": true, 00:17:30.926 "data_offset": 2048, 00:17:30.926 "data_size": 63488 00:17:30.926 } 00:17:30.926 ] 00:17:30.926 }' 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82267 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82267 ']' 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82267 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82267 00:17:30.926 killing process with pid 82267 00:17:30.926 Received shutdown signal, test time was about 60.000000 seconds 00:17:30.926 00:17:30.926 Latency(us) 00:17:30.926 [2024-11-15T11:29:13.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.926 [2024-11-15T11:29:13.876Z] =================================================================================================================== 00:17:30.926 [2024-11-15T11:29:13.876Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82267' 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82267 00:17:30.926 11:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82267 00:17:30.926 [2024-11-15 11:29:13.841882] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:30.926 [2024-11-15 11:29:13.842066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.926 [2024-11-15 11:29:13.842239] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.926 [2024-11-15 11:29:13.842317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:31.493 [2024-11-15 11:29:14.170709] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.431 ************************************ 00:17:32.431 END TEST 
raid5f_rebuild_test_sb 00:17:32.431 ************************************ 00:17:32.431 11:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:32.431 00:17:32.431 real 0m24.929s 00:17:32.431 user 0m33.066s 00:17:32.431 sys 0m2.790s 00:17:32.431 11:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:32.431 11:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.431 11:29:15 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:32.431 11:29:15 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:32.431 11:29:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:32.431 11:29:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:32.431 11:29:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.431 ************************************ 00:17:32.431 START TEST raid5f_state_function_test 00:17:32.431 ************************************ 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:32.431 
11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 
00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83036 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:32.431 Process raid pid: 83036 00:17:32.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83036' 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83036 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 83036 ']' 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:32.431 11:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.691 [2024-11-15 11:29:15.395197] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:17:32.691 [2024-11-15 11:29:15.395668] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.691 [2024-11-15 11:29:15.583400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.951 [2024-11-15 11:29:15.722203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.210 [2024-11-15 11:29:15.928809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.210 [2024-11-15 11:29:15.928871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.469 [2024-11-15 11:29:16.364967] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:33.469 [2024-11-15 11:29:16.365278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:33.469 [2024-11-15 11:29:16.365431] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.469 [2024-11-15 11:29:16.365468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.469 [2024-11-15 11:29:16.365481] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:33.469 [2024-11-15 11:29:16.365497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:33.469 [2024-11-15 11:29:16.365508] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:33.469 [2024-11-15 11:29:16.365523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.469 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.729 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.729 "name": "Existed_Raid", 00:17:33.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.729 "strip_size_kb": 64, 00:17:33.729 "state": "configuring", 00:17:33.729 "raid_level": "raid5f", 00:17:33.729 "superblock": false, 00:17:33.729 "num_base_bdevs": 4, 00:17:33.729 "num_base_bdevs_discovered": 0, 00:17:33.729 "num_base_bdevs_operational": 4, 00:17:33.729 "base_bdevs_list": [ 00:17:33.729 { 00:17:33.729 "name": "BaseBdev1", 00:17:33.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.729 "is_configured": false, 00:17:33.729 "data_offset": 0, 00:17:33.729 "data_size": 0 00:17:33.729 }, 00:17:33.729 { 00:17:33.729 "name": "BaseBdev2", 00:17:33.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.729 "is_configured": false, 00:17:33.729 "data_offset": 0, 00:17:33.729 "data_size": 0 00:17:33.729 }, 00:17:33.729 { 00:17:33.729 "name": "BaseBdev3", 00:17:33.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.729 "is_configured": false, 00:17:33.729 "data_offset": 0, 00:17:33.729 "data_size": 0 00:17:33.729 }, 00:17:33.729 { 00:17:33.729 "name": "BaseBdev4", 00:17:33.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.729 "is_configured": false, 00:17:33.729 "data_offset": 0, 00:17:33.729 "data_size": 0 00:17:33.729 } 00:17:33.729 ] 00:17:33.729 }' 00:17:33.729 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.729 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.987 [2024-11-15 11:29:16.864930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:33.987 [2024-11-15 11:29:16.864977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.987 [2024-11-15 11:29:16.872908] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:33.987 [2024-11-15 11:29:16.873120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:33.987 [2024-11-15 11:29:16.873287] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.987 [2024-11-15 11:29:16.873322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.987 [2024-11-15 11:29:16.873333] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:33.987 [2024-11-15 11:29:16.873348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:33.987 [2024-11-15 11:29:16.873357] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:33.987 [2024-11-15 11:29:16.873371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.987 [2024-11-15 11:29:16.917950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.987 BaseBdev1 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:33.987 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:33.988 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:33.988 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:33.988 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:33.988 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:33.988 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:33.988 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.988 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.988 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.988 
11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:33.988 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.988 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.246 [ 00:17:34.246 { 00:17:34.246 "name": "BaseBdev1", 00:17:34.246 "aliases": [ 00:17:34.246 "5bb16d24-d247-4d36-a4ed-14a0662786b3" 00:17:34.246 ], 00:17:34.246 "product_name": "Malloc disk", 00:17:34.246 "block_size": 512, 00:17:34.246 "num_blocks": 65536, 00:17:34.246 "uuid": "5bb16d24-d247-4d36-a4ed-14a0662786b3", 00:17:34.246 "assigned_rate_limits": { 00:17:34.246 "rw_ios_per_sec": 0, 00:17:34.246 "rw_mbytes_per_sec": 0, 00:17:34.246 "r_mbytes_per_sec": 0, 00:17:34.246 "w_mbytes_per_sec": 0 00:17:34.246 }, 00:17:34.246 "claimed": true, 00:17:34.246 "claim_type": "exclusive_write", 00:17:34.246 "zoned": false, 00:17:34.246 "supported_io_types": { 00:17:34.246 "read": true, 00:17:34.246 "write": true, 00:17:34.246 "unmap": true, 00:17:34.246 "flush": true, 00:17:34.246 "reset": true, 00:17:34.246 "nvme_admin": false, 00:17:34.246 "nvme_io": false, 00:17:34.246 "nvme_io_md": false, 00:17:34.246 "write_zeroes": true, 00:17:34.246 "zcopy": true, 00:17:34.246 "get_zone_info": false, 00:17:34.246 "zone_management": false, 00:17:34.246 "zone_append": false, 00:17:34.246 "compare": false, 00:17:34.246 "compare_and_write": false, 00:17:34.246 "abort": true, 00:17:34.246 "seek_hole": false, 00:17:34.246 "seek_data": false, 00:17:34.246 "copy": true, 00:17:34.246 "nvme_iov_md": false 00:17:34.246 }, 00:17:34.246 "memory_domains": [ 00:17:34.246 { 00:17:34.246 "dma_device_id": "system", 00:17:34.246 "dma_device_type": 1 00:17:34.246 }, 00:17:34.246 { 00:17:34.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.246 "dma_device_type": 2 00:17:34.246 } 00:17:34.246 ], 00:17:34.246 "driver_specific": {} 00:17:34.246 } 
00:17:34.246 ] 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.246 11:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:34.246 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.246 "name": "Existed_Raid", 00:17:34.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.247 "strip_size_kb": 64, 00:17:34.247 "state": "configuring", 00:17:34.247 "raid_level": "raid5f", 00:17:34.247 "superblock": false, 00:17:34.247 "num_base_bdevs": 4, 00:17:34.247 "num_base_bdevs_discovered": 1, 00:17:34.247 "num_base_bdevs_operational": 4, 00:17:34.247 "base_bdevs_list": [ 00:17:34.247 { 00:17:34.247 "name": "BaseBdev1", 00:17:34.247 "uuid": "5bb16d24-d247-4d36-a4ed-14a0662786b3", 00:17:34.247 "is_configured": true, 00:17:34.247 "data_offset": 0, 00:17:34.247 "data_size": 65536 00:17:34.247 }, 00:17:34.247 { 00:17:34.247 "name": "BaseBdev2", 00:17:34.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.247 "is_configured": false, 00:17:34.247 "data_offset": 0, 00:17:34.247 "data_size": 0 00:17:34.247 }, 00:17:34.247 { 00:17:34.247 "name": "BaseBdev3", 00:17:34.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.247 "is_configured": false, 00:17:34.247 "data_offset": 0, 00:17:34.247 "data_size": 0 00:17:34.247 }, 00:17:34.247 { 00:17:34.247 "name": "BaseBdev4", 00:17:34.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.247 "is_configured": false, 00:17:34.247 "data_offset": 0, 00:17:34.247 "data_size": 0 00:17:34.247 } 00:17:34.247 ] 00:17:34.247 }' 00:17:34.247 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.247 11:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.813 
[2024-11-15 11:29:17.466217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:34.813 [2024-11-15 11:29:17.466435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.813 [2024-11-15 11:29:17.474287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.813 [2024-11-15 11:29:17.477133] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:34.813 [2024-11-15 11:29:17.477422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:34.813 [2024-11-15 11:29:17.477575] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:34.813 [2024-11-15 11:29:17.477701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:34.813 [2024-11-15 11:29:17.477821] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:34.813 [2024-11-15 11:29:17.477979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.813 "name": "Existed_Raid", 00:17:34.813 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:34.813 "strip_size_kb": 64, 00:17:34.813 "state": "configuring", 00:17:34.813 "raid_level": "raid5f", 00:17:34.813 "superblock": false, 00:17:34.813 "num_base_bdevs": 4, 00:17:34.813 "num_base_bdevs_discovered": 1, 00:17:34.813 "num_base_bdevs_operational": 4, 00:17:34.813 "base_bdevs_list": [ 00:17:34.813 { 00:17:34.813 "name": "BaseBdev1", 00:17:34.813 "uuid": "5bb16d24-d247-4d36-a4ed-14a0662786b3", 00:17:34.813 "is_configured": true, 00:17:34.813 "data_offset": 0, 00:17:34.813 "data_size": 65536 00:17:34.813 }, 00:17:34.813 { 00:17:34.813 "name": "BaseBdev2", 00:17:34.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.813 "is_configured": false, 00:17:34.813 "data_offset": 0, 00:17:34.813 "data_size": 0 00:17:34.813 }, 00:17:34.813 { 00:17:34.813 "name": "BaseBdev3", 00:17:34.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.813 "is_configured": false, 00:17:34.813 "data_offset": 0, 00:17:34.813 "data_size": 0 00:17:34.813 }, 00:17:34.813 { 00:17:34.813 "name": "BaseBdev4", 00:17:34.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.813 "is_configured": false, 00:17:34.813 "data_offset": 0, 00:17:34.813 "data_size": 0 00:17:34.813 } 00:17:34.813 ] 00:17:34.813 }' 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.813 11:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.072 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:35.072 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.072 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.330 [2024-11-15 11:29:18.045925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:35.330 BaseBdev2 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.330 [ 00:17:35.330 { 00:17:35.330 "name": "BaseBdev2", 00:17:35.330 "aliases": [ 00:17:35.330 "093e9574-f169-4818-b058-82ff7aac9e2d" 00:17:35.330 ], 00:17:35.330 "product_name": "Malloc disk", 00:17:35.330 "block_size": 512, 00:17:35.330 "num_blocks": 65536, 00:17:35.330 "uuid": "093e9574-f169-4818-b058-82ff7aac9e2d", 00:17:35.330 "assigned_rate_limits": { 00:17:35.330 "rw_ios_per_sec": 0, 00:17:35.330 "rw_mbytes_per_sec": 0, 00:17:35.330 
"r_mbytes_per_sec": 0, 00:17:35.330 "w_mbytes_per_sec": 0 00:17:35.330 }, 00:17:35.330 "claimed": true, 00:17:35.330 "claim_type": "exclusive_write", 00:17:35.330 "zoned": false, 00:17:35.330 "supported_io_types": { 00:17:35.330 "read": true, 00:17:35.330 "write": true, 00:17:35.330 "unmap": true, 00:17:35.330 "flush": true, 00:17:35.330 "reset": true, 00:17:35.330 "nvme_admin": false, 00:17:35.330 "nvme_io": false, 00:17:35.330 "nvme_io_md": false, 00:17:35.330 "write_zeroes": true, 00:17:35.330 "zcopy": true, 00:17:35.330 "get_zone_info": false, 00:17:35.330 "zone_management": false, 00:17:35.330 "zone_append": false, 00:17:35.330 "compare": false, 00:17:35.330 "compare_and_write": false, 00:17:35.330 "abort": true, 00:17:35.330 "seek_hole": false, 00:17:35.330 "seek_data": false, 00:17:35.330 "copy": true, 00:17:35.330 "nvme_iov_md": false 00:17:35.330 }, 00:17:35.330 "memory_domains": [ 00:17:35.330 { 00:17:35.330 "dma_device_id": "system", 00:17:35.330 "dma_device_type": 1 00:17:35.330 }, 00:17:35.330 { 00:17:35.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.330 "dma_device_type": 2 00:17:35.330 } 00:17:35.330 ], 00:17:35.330 "driver_specific": {} 00:17:35.330 } 00:17:35.330 ] 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.330 "name": "Existed_Raid", 00:17:35.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.330 "strip_size_kb": 64, 00:17:35.330 "state": "configuring", 00:17:35.330 "raid_level": "raid5f", 00:17:35.330 "superblock": false, 00:17:35.330 "num_base_bdevs": 4, 00:17:35.330 "num_base_bdevs_discovered": 2, 00:17:35.330 "num_base_bdevs_operational": 4, 00:17:35.330 "base_bdevs_list": [ 00:17:35.330 { 00:17:35.330 "name": "BaseBdev1", 00:17:35.330 "uuid": 
"5bb16d24-d247-4d36-a4ed-14a0662786b3", 00:17:35.330 "is_configured": true, 00:17:35.330 "data_offset": 0, 00:17:35.330 "data_size": 65536 00:17:35.330 }, 00:17:35.330 { 00:17:35.330 "name": "BaseBdev2", 00:17:35.330 "uuid": "093e9574-f169-4818-b058-82ff7aac9e2d", 00:17:35.330 "is_configured": true, 00:17:35.330 "data_offset": 0, 00:17:35.330 "data_size": 65536 00:17:35.330 }, 00:17:35.330 { 00:17:35.330 "name": "BaseBdev3", 00:17:35.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.330 "is_configured": false, 00:17:35.330 "data_offset": 0, 00:17:35.330 "data_size": 0 00:17:35.330 }, 00:17:35.330 { 00:17:35.330 "name": "BaseBdev4", 00:17:35.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.330 "is_configured": false, 00:17:35.330 "data_offset": 0, 00:17:35.330 "data_size": 0 00:17:35.330 } 00:17:35.330 ] 00:17:35.330 }' 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.330 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.895 [2024-11-15 11:29:18.649060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:35.895 BaseBdev3 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.895 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.895 [ 00:17:35.895 { 00:17:35.896 "name": "BaseBdev3", 00:17:35.896 "aliases": [ 00:17:35.896 "e6386b32-2ebf-42e9-a4d5-ad5a74f7ee90" 00:17:35.896 ], 00:17:35.896 "product_name": "Malloc disk", 00:17:35.896 "block_size": 512, 00:17:35.896 "num_blocks": 65536, 00:17:35.896 "uuid": "e6386b32-2ebf-42e9-a4d5-ad5a74f7ee90", 00:17:35.896 "assigned_rate_limits": { 00:17:35.896 "rw_ios_per_sec": 0, 00:17:35.896 "rw_mbytes_per_sec": 0, 00:17:35.896 "r_mbytes_per_sec": 0, 00:17:35.896 "w_mbytes_per_sec": 0 00:17:35.896 }, 00:17:35.896 "claimed": true, 00:17:35.896 "claim_type": "exclusive_write", 00:17:35.896 "zoned": false, 00:17:35.896 "supported_io_types": { 00:17:35.896 "read": true, 00:17:35.896 "write": true, 00:17:35.896 "unmap": true, 00:17:35.896 "flush": true, 00:17:35.896 "reset": true, 00:17:35.896 "nvme_admin": false, 
00:17:35.896 "nvme_io": false, 00:17:35.896 "nvme_io_md": false, 00:17:35.896 "write_zeroes": true, 00:17:35.896 "zcopy": true, 00:17:35.896 "get_zone_info": false, 00:17:35.896 "zone_management": false, 00:17:35.896 "zone_append": false, 00:17:35.896 "compare": false, 00:17:35.896 "compare_and_write": false, 00:17:35.896 "abort": true, 00:17:35.896 "seek_hole": false, 00:17:35.896 "seek_data": false, 00:17:35.896 "copy": true, 00:17:35.896 "nvme_iov_md": false 00:17:35.896 }, 00:17:35.896 "memory_domains": [ 00:17:35.896 { 00:17:35.896 "dma_device_id": "system", 00:17:35.896 "dma_device_type": 1 00:17:35.896 }, 00:17:35.896 { 00:17:35.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.896 "dma_device_type": 2 00:17:35.896 } 00:17:35.896 ], 00:17:35.896 "driver_specific": {} 00:17:35.896 } 00:17:35.896 ] 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.896 "name": "Existed_Raid", 00:17:35.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.896 "strip_size_kb": 64, 00:17:35.896 "state": "configuring", 00:17:35.896 "raid_level": "raid5f", 00:17:35.896 "superblock": false, 00:17:35.896 "num_base_bdevs": 4, 00:17:35.896 "num_base_bdevs_discovered": 3, 00:17:35.896 "num_base_bdevs_operational": 4, 00:17:35.896 "base_bdevs_list": [ 00:17:35.896 { 00:17:35.896 "name": "BaseBdev1", 00:17:35.896 "uuid": "5bb16d24-d247-4d36-a4ed-14a0662786b3", 00:17:35.896 "is_configured": true, 00:17:35.896 "data_offset": 0, 00:17:35.896 "data_size": 65536 00:17:35.896 }, 00:17:35.896 { 00:17:35.896 "name": "BaseBdev2", 00:17:35.896 "uuid": "093e9574-f169-4818-b058-82ff7aac9e2d", 00:17:35.896 "is_configured": true, 00:17:35.896 "data_offset": 0, 00:17:35.896 "data_size": 65536 00:17:35.896 }, 00:17:35.896 { 
00:17:35.896 "name": "BaseBdev3", 00:17:35.896 "uuid": "e6386b32-2ebf-42e9-a4d5-ad5a74f7ee90", 00:17:35.896 "is_configured": true, 00:17:35.896 "data_offset": 0, 00:17:35.896 "data_size": 65536 00:17:35.896 }, 00:17:35.896 { 00:17:35.896 "name": "BaseBdev4", 00:17:35.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.896 "is_configured": false, 00:17:35.896 "data_offset": 0, 00:17:35.896 "data_size": 0 00:17:35.896 } 00:17:35.896 ] 00:17:35.896 }' 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.896 11:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.463 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:36.463 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.463 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.463 [2024-11-15 11:29:19.249050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:36.463 [2024-11-15 11:29:19.249474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:36.463 [2024-11-15 11:29:19.249500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:36.463 [2024-11-15 11:29:19.249903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:36.463 [2024-11-15 11:29:19.256631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:36.463 [2024-11-15 11:29:19.256806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:36.463 [2024-11-15 11:29:19.257354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.463 BaseBdev4 00:17:36.463 11:29:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.463 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:36.463 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:36.463 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:36.463 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.464 [ 00:17:36.464 { 00:17:36.464 "name": "BaseBdev4", 00:17:36.464 "aliases": [ 00:17:36.464 "9a411b27-a2f5-43c7-8054-b4db65a36f03" 00:17:36.464 ], 00:17:36.464 "product_name": "Malloc disk", 00:17:36.464 "block_size": 512, 00:17:36.464 "num_blocks": 65536, 00:17:36.464 "uuid": "9a411b27-a2f5-43c7-8054-b4db65a36f03", 00:17:36.464 "assigned_rate_limits": { 00:17:36.464 "rw_ios_per_sec": 0, 00:17:36.464 
"rw_mbytes_per_sec": 0, 00:17:36.464 "r_mbytes_per_sec": 0, 00:17:36.464 "w_mbytes_per_sec": 0 00:17:36.464 }, 00:17:36.464 "claimed": true, 00:17:36.464 "claim_type": "exclusive_write", 00:17:36.464 "zoned": false, 00:17:36.464 "supported_io_types": { 00:17:36.464 "read": true, 00:17:36.464 "write": true, 00:17:36.464 "unmap": true, 00:17:36.464 "flush": true, 00:17:36.464 "reset": true, 00:17:36.464 "nvme_admin": false, 00:17:36.464 "nvme_io": false, 00:17:36.464 "nvme_io_md": false, 00:17:36.464 "write_zeroes": true, 00:17:36.464 "zcopy": true, 00:17:36.464 "get_zone_info": false, 00:17:36.464 "zone_management": false, 00:17:36.464 "zone_append": false, 00:17:36.464 "compare": false, 00:17:36.464 "compare_and_write": false, 00:17:36.464 "abort": true, 00:17:36.464 "seek_hole": false, 00:17:36.464 "seek_data": false, 00:17:36.464 "copy": true, 00:17:36.464 "nvme_iov_md": false 00:17:36.464 }, 00:17:36.464 "memory_domains": [ 00:17:36.464 { 00:17:36.464 "dma_device_id": "system", 00:17:36.464 "dma_device_type": 1 00:17:36.464 }, 00:17:36.464 { 00:17:36.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.464 "dma_device_type": 2 00:17:36.464 } 00:17:36.464 ], 00:17:36.464 "driver_specific": {} 00:17:36.464 } 00:17:36.464 ] 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.464 11:29:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.464 "name": "Existed_Raid", 00:17:36.464 "uuid": "290613ac-9643-48c2-8ba8-3d015386afa3", 00:17:36.464 "strip_size_kb": 64, 00:17:36.464 "state": "online", 00:17:36.464 "raid_level": "raid5f", 00:17:36.464 "superblock": false, 00:17:36.464 "num_base_bdevs": 4, 00:17:36.464 "num_base_bdevs_discovered": 4, 00:17:36.464 "num_base_bdevs_operational": 4, 00:17:36.464 "base_bdevs_list": [ 00:17:36.464 { 00:17:36.464 "name": 
"BaseBdev1", 00:17:36.464 "uuid": "5bb16d24-d247-4d36-a4ed-14a0662786b3", 00:17:36.464 "is_configured": true, 00:17:36.464 "data_offset": 0, 00:17:36.464 "data_size": 65536 00:17:36.464 }, 00:17:36.464 { 00:17:36.464 "name": "BaseBdev2", 00:17:36.464 "uuid": "093e9574-f169-4818-b058-82ff7aac9e2d", 00:17:36.464 "is_configured": true, 00:17:36.464 "data_offset": 0, 00:17:36.464 "data_size": 65536 00:17:36.464 }, 00:17:36.464 { 00:17:36.464 "name": "BaseBdev3", 00:17:36.464 "uuid": "e6386b32-2ebf-42e9-a4d5-ad5a74f7ee90", 00:17:36.464 "is_configured": true, 00:17:36.464 "data_offset": 0, 00:17:36.464 "data_size": 65536 00:17:36.464 }, 00:17:36.464 { 00:17:36.464 "name": "BaseBdev4", 00:17:36.464 "uuid": "9a411b27-a2f5-43c7-8054-b4db65a36f03", 00:17:36.464 "is_configured": true, 00:17:36.464 "data_offset": 0, 00:17:36.464 "data_size": 65536 00:17:36.464 } 00:17:36.464 ] 00:17:36.464 }' 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.464 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.032 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:37.032 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:37.032 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:37.032 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:37.032 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:37.032 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:37.032 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:37.032 11:29:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:37.032 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.032 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.032 [2024-11-15 11:29:19.825884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.032 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.032 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:37.032 "name": "Existed_Raid", 00:17:37.032 "aliases": [ 00:17:37.032 "290613ac-9643-48c2-8ba8-3d015386afa3" 00:17:37.032 ], 00:17:37.032 "product_name": "Raid Volume", 00:17:37.032 "block_size": 512, 00:17:37.032 "num_blocks": 196608, 00:17:37.032 "uuid": "290613ac-9643-48c2-8ba8-3d015386afa3", 00:17:37.032 "assigned_rate_limits": { 00:17:37.032 "rw_ios_per_sec": 0, 00:17:37.032 "rw_mbytes_per_sec": 0, 00:17:37.032 "r_mbytes_per_sec": 0, 00:17:37.032 "w_mbytes_per_sec": 0 00:17:37.032 }, 00:17:37.032 "claimed": false, 00:17:37.032 "zoned": false, 00:17:37.032 "supported_io_types": { 00:17:37.032 "read": true, 00:17:37.032 "write": true, 00:17:37.032 "unmap": false, 00:17:37.032 "flush": false, 00:17:37.032 "reset": true, 00:17:37.032 "nvme_admin": false, 00:17:37.032 "nvme_io": false, 00:17:37.032 "nvme_io_md": false, 00:17:37.032 "write_zeroes": true, 00:17:37.032 "zcopy": false, 00:17:37.032 "get_zone_info": false, 00:17:37.032 "zone_management": false, 00:17:37.032 "zone_append": false, 00:17:37.032 "compare": false, 00:17:37.032 "compare_and_write": false, 00:17:37.032 "abort": false, 00:17:37.032 "seek_hole": false, 00:17:37.032 "seek_data": false, 00:17:37.032 "copy": false, 00:17:37.032 "nvme_iov_md": false 00:17:37.032 }, 00:17:37.032 "driver_specific": { 00:17:37.032 "raid": { 00:17:37.032 "uuid": "290613ac-9643-48c2-8ba8-3d015386afa3", 00:17:37.032 "strip_size_kb": 64, 
00:17:37.032 "state": "online", 00:17:37.032 "raid_level": "raid5f", 00:17:37.032 "superblock": false, 00:17:37.032 "num_base_bdevs": 4, 00:17:37.032 "num_base_bdevs_discovered": 4, 00:17:37.033 "num_base_bdevs_operational": 4, 00:17:37.033 "base_bdevs_list": [ 00:17:37.033 { 00:17:37.033 "name": "BaseBdev1", 00:17:37.033 "uuid": "5bb16d24-d247-4d36-a4ed-14a0662786b3", 00:17:37.033 "is_configured": true, 00:17:37.033 "data_offset": 0, 00:17:37.033 "data_size": 65536 00:17:37.033 }, 00:17:37.033 { 00:17:37.033 "name": "BaseBdev2", 00:17:37.033 "uuid": "093e9574-f169-4818-b058-82ff7aac9e2d", 00:17:37.033 "is_configured": true, 00:17:37.033 "data_offset": 0, 00:17:37.033 "data_size": 65536 00:17:37.033 }, 00:17:37.033 { 00:17:37.033 "name": "BaseBdev3", 00:17:37.033 "uuid": "e6386b32-2ebf-42e9-a4d5-ad5a74f7ee90", 00:17:37.033 "is_configured": true, 00:17:37.033 "data_offset": 0, 00:17:37.033 "data_size": 65536 00:17:37.033 }, 00:17:37.033 { 00:17:37.033 "name": "BaseBdev4", 00:17:37.033 "uuid": "9a411b27-a2f5-43c7-8054-b4db65a36f03", 00:17:37.033 "is_configured": true, 00:17:37.033 "data_offset": 0, 00:17:37.033 "data_size": 65536 00:17:37.033 } 00:17:37.033 ] 00:17:37.033 } 00:17:37.033 } 00:17:37.033 }' 00:17:37.033 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:37.033 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:37.033 BaseBdev2 00:17:37.033 BaseBdev3 00:17:37.033 BaseBdev4' 00:17:37.033 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.033 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:37.033 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.033 11:29:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:37.033 11:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.033 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.033 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.292 11:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.292 11:29:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:37.292 [2024-11-15 11:29:20.193727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.551 11:29:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.551 "name": "Existed_Raid", 00:17:37.551 "uuid": "290613ac-9643-48c2-8ba8-3d015386afa3", 00:17:37.551 "strip_size_kb": 64, 00:17:37.551 "state": "online", 00:17:37.551 "raid_level": "raid5f", 00:17:37.551 "superblock": false, 00:17:37.551 "num_base_bdevs": 4, 00:17:37.551 "num_base_bdevs_discovered": 3, 00:17:37.551 "num_base_bdevs_operational": 3, 00:17:37.551 "base_bdevs_list": [ 00:17:37.551 { 00:17:37.551 "name": null, 00:17:37.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.551 "is_configured": false, 00:17:37.551 "data_offset": 0, 00:17:37.551 "data_size": 65536 00:17:37.551 }, 00:17:37.551 { 00:17:37.551 "name": "BaseBdev2", 00:17:37.551 "uuid": "093e9574-f169-4818-b058-82ff7aac9e2d", 00:17:37.551 "is_configured": true, 00:17:37.551 "data_offset": 0, 00:17:37.551 "data_size": 65536 00:17:37.551 }, 00:17:37.551 { 00:17:37.551 "name": "BaseBdev3", 00:17:37.551 "uuid": "e6386b32-2ebf-42e9-a4d5-ad5a74f7ee90", 00:17:37.551 "is_configured": true, 00:17:37.551 "data_offset": 0, 00:17:37.551 "data_size": 65536 00:17:37.551 }, 00:17:37.551 { 00:17:37.551 "name": "BaseBdev4", 00:17:37.551 "uuid": "9a411b27-a2f5-43c7-8054-b4db65a36f03", 00:17:37.551 "is_configured": true, 00:17:37.551 "data_offset": 0, 00:17:37.551 "data_size": 65536 00:17:37.551 } 00:17:37.551 ] 00:17:37.551 }' 00:17:37.551 
11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.551 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.119 [2024-11-15 11:29:20.815361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:38.119 [2024-11-15 11:29:20.815530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.119 [2024-11-15 11:29:20.889881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.119 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:38.120 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.120 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:38.120 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.120 11:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:38.120 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.120 11:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.120 [2024-11-15 11:29:20.949918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:38.120 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.120 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:38.120 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.120 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.120 11:29:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.120 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.120 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:38.120 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.379 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:38.379 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.379 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:38.379 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.379 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.379 [2024-11-15 11:29:21.090379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:38.380 [2024-11-15 11:29:21.090515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.380 BaseBdev2 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.380 [ 00:17:38.380 { 00:17:38.380 "name": "BaseBdev2", 00:17:38.380 "aliases": [ 00:17:38.380 "34e7bfd2-d976-48c3-8767-a0e9dc0bed1c" 00:17:38.380 ], 00:17:38.380 "product_name": "Malloc disk", 00:17:38.380 "block_size": 512, 00:17:38.380 "num_blocks": 65536, 00:17:38.380 "uuid": "34e7bfd2-d976-48c3-8767-a0e9dc0bed1c", 00:17:38.380 "assigned_rate_limits": { 00:17:38.380 "rw_ios_per_sec": 0, 00:17:38.380 "rw_mbytes_per_sec": 0, 00:17:38.380 "r_mbytes_per_sec": 0, 00:17:38.380 "w_mbytes_per_sec": 0 00:17:38.380 }, 00:17:38.380 "claimed": false, 00:17:38.380 "zoned": false, 00:17:38.380 "supported_io_types": { 00:17:38.380 "read": true, 00:17:38.380 "write": true, 00:17:38.380 "unmap": true, 00:17:38.380 "flush": true, 00:17:38.380 "reset": true, 00:17:38.380 "nvme_admin": false, 00:17:38.380 "nvme_io": false, 00:17:38.380 "nvme_io_md": false, 00:17:38.380 "write_zeroes": true, 00:17:38.380 "zcopy": true, 00:17:38.380 "get_zone_info": false, 00:17:38.380 "zone_management": false, 00:17:38.380 "zone_append": false, 00:17:38.380 "compare": false, 00:17:38.380 "compare_and_write": false, 00:17:38.380 "abort": true, 00:17:38.380 "seek_hole": false, 00:17:38.380 "seek_data": false, 00:17:38.380 "copy": true, 00:17:38.380 "nvme_iov_md": false 00:17:38.380 }, 00:17:38.380 "memory_domains": [ 00:17:38.380 { 00:17:38.380 "dma_device_id": "system", 00:17:38.380 
"dma_device_type": 1 00:17:38.380 }, 00:17:38.380 { 00:17:38.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.380 "dma_device_type": 2 00:17:38.380 } 00:17:38.380 ], 00:17:38.380 "driver_specific": {} 00:17:38.380 } 00:17:38.380 ] 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.380 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.640 BaseBdev3 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:38.640 11:29:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.640 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.640 [ 00:17:38.640 { 00:17:38.640 "name": "BaseBdev3", 00:17:38.640 "aliases": [ 00:17:38.640 "1ce91b54-a0b9-48a0-859e-1f900fdc94ae" 00:17:38.640 ], 00:17:38.640 "product_name": "Malloc disk", 00:17:38.640 "block_size": 512, 00:17:38.640 "num_blocks": 65536, 00:17:38.640 "uuid": "1ce91b54-a0b9-48a0-859e-1f900fdc94ae", 00:17:38.640 "assigned_rate_limits": { 00:17:38.640 "rw_ios_per_sec": 0, 00:17:38.640 "rw_mbytes_per_sec": 0, 00:17:38.640 "r_mbytes_per_sec": 0, 00:17:38.640 "w_mbytes_per_sec": 0 00:17:38.640 }, 00:17:38.640 "claimed": false, 00:17:38.640 "zoned": false, 00:17:38.640 "supported_io_types": { 00:17:38.640 "read": true, 00:17:38.640 "write": true, 00:17:38.640 "unmap": true, 00:17:38.640 "flush": true, 00:17:38.640 "reset": true, 00:17:38.640 "nvme_admin": false, 00:17:38.640 "nvme_io": false, 00:17:38.640 "nvme_io_md": false, 00:17:38.640 "write_zeroes": true, 00:17:38.640 "zcopy": true, 00:17:38.640 "get_zone_info": false, 00:17:38.640 "zone_management": false, 00:17:38.640 "zone_append": false, 00:17:38.640 "compare": false, 00:17:38.640 "compare_and_write": false, 00:17:38.640 "abort": true, 00:17:38.640 "seek_hole": false, 00:17:38.640 "seek_data": false, 00:17:38.640 "copy": true, 00:17:38.640 "nvme_iov_md": false 00:17:38.640 }, 00:17:38.640 "memory_domains": [ 00:17:38.640 { 00:17:38.640 
"dma_device_id": "system", 00:17:38.640 "dma_device_type": 1 00:17:38.640 }, 00:17:38.640 { 00:17:38.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.640 "dma_device_type": 2 00:17:38.640 } 00:17:38.641 ], 00:17:38.641 "driver_specific": {} 00:17:38.641 } 00:17:38.641 ] 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.641 BaseBdev4 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 
00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.641 [ 00:17:38.641 { 00:17:38.641 "name": "BaseBdev4", 00:17:38.641 "aliases": [ 00:17:38.641 "852c03b3-ad4d-4855-899f-06af68999178" 00:17:38.641 ], 00:17:38.641 "product_name": "Malloc disk", 00:17:38.641 "block_size": 512, 00:17:38.641 "num_blocks": 65536, 00:17:38.641 "uuid": "852c03b3-ad4d-4855-899f-06af68999178", 00:17:38.641 "assigned_rate_limits": { 00:17:38.641 "rw_ios_per_sec": 0, 00:17:38.641 "rw_mbytes_per_sec": 0, 00:17:38.641 "r_mbytes_per_sec": 0, 00:17:38.641 "w_mbytes_per_sec": 0 00:17:38.641 }, 00:17:38.641 "claimed": false, 00:17:38.641 "zoned": false, 00:17:38.641 "supported_io_types": { 00:17:38.641 "read": true, 00:17:38.641 "write": true, 00:17:38.641 "unmap": true, 00:17:38.641 "flush": true, 00:17:38.641 "reset": true, 00:17:38.641 "nvme_admin": false, 00:17:38.641 "nvme_io": false, 00:17:38.641 "nvme_io_md": false, 00:17:38.641 "write_zeroes": true, 00:17:38.641 "zcopy": true, 00:17:38.641 "get_zone_info": false, 00:17:38.641 "zone_management": false, 00:17:38.641 "zone_append": false, 00:17:38.641 "compare": false, 00:17:38.641 "compare_and_write": false, 00:17:38.641 "abort": true, 00:17:38.641 "seek_hole": false, 00:17:38.641 "seek_data": false, 00:17:38.641 "copy": true, 00:17:38.641 "nvme_iov_md": false 00:17:38.641 }, 00:17:38.641 "memory_domains": [ 
00:17:38.641 { 00:17:38.641 "dma_device_id": "system", 00:17:38.641 "dma_device_type": 1 00:17:38.641 }, 00:17:38.641 { 00:17:38.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.641 "dma_device_type": 2 00:17:38.641 } 00:17:38.641 ], 00:17:38.641 "driver_specific": {} 00:17:38.641 } 00:17:38.641 ] 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.641 [2024-11-15 11:29:21.433319] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:38.641 [2024-11-15 11:29:21.433389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:38.641 [2024-11-15 11:29:21.433420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.641 [2024-11-15 11:29:21.435909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:38.641 [2024-11-15 11:29:21.435993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.641 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.641 "name": "Existed_Raid", 00:17:38.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.641 "strip_size_kb": 64, 00:17:38.641 "state": "configuring", 00:17:38.641 "raid_level": "raid5f", 00:17:38.641 
"superblock": false, 00:17:38.641 "num_base_bdevs": 4, 00:17:38.641 "num_base_bdevs_discovered": 3, 00:17:38.641 "num_base_bdevs_operational": 4, 00:17:38.641 "base_bdevs_list": [ 00:17:38.641 { 00:17:38.641 "name": "BaseBdev1", 00:17:38.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.641 "is_configured": false, 00:17:38.641 "data_offset": 0, 00:17:38.641 "data_size": 0 00:17:38.641 }, 00:17:38.641 { 00:17:38.641 "name": "BaseBdev2", 00:17:38.641 "uuid": "34e7bfd2-d976-48c3-8767-a0e9dc0bed1c", 00:17:38.641 "is_configured": true, 00:17:38.641 "data_offset": 0, 00:17:38.641 "data_size": 65536 00:17:38.641 }, 00:17:38.641 { 00:17:38.641 "name": "BaseBdev3", 00:17:38.641 "uuid": "1ce91b54-a0b9-48a0-859e-1f900fdc94ae", 00:17:38.641 "is_configured": true, 00:17:38.641 "data_offset": 0, 00:17:38.641 "data_size": 65536 00:17:38.641 }, 00:17:38.641 { 00:17:38.641 "name": "BaseBdev4", 00:17:38.641 "uuid": "852c03b3-ad4d-4855-899f-06af68999178", 00:17:38.641 "is_configured": true, 00:17:38.642 "data_offset": 0, 00:17:38.642 "data_size": 65536 00:17:38.642 } 00:17:38.642 ] 00:17:38.642 }' 00:17:38.642 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.642 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.211 [2024-11-15 11:29:21.929564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.211 "name": "Existed_Raid", 00:17:39.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.211 "strip_size_kb": 64, 00:17:39.211 "state": "configuring", 00:17:39.211 "raid_level": "raid5f", 00:17:39.211 "superblock": false, 
00:17:39.211 "num_base_bdevs": 4, 00:17:39.211 "num_base_bdevs_discovered": 2, 00:17:39.211 "num_base_bdevs_operational": 4, 00:17:39.211 "base_bdevs_list": [ 00:17:39.211 { 00:17:39.211 "name": "BaseBdev1", 00:17:39.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.211 "is_configured": false, 00:17:39.211 "data_offset": 0, 00:17:39.211 "data_size": 0 00:17:39.211 }, 00:17:39.211 { 00:17:39.211 "name": null, 00:17:39.211 "uuid": "34e7bfd2-d976-48c3-8767-a0e9dc0bed1c", 00:17:39.211 "is_configured": false, 00:17:39.211 "data_offset": 0, 00:17:39.211 "data_size": 65536 00:17:39.211 }, 00:17:39.211 { 00:17:39.211 "name": "BaseBdev3", 00:17:39.211 "uuid": "1ce91b54-a0b9-48a0-859e-1f900fdc94ae", 00:17:39.211 "is_configured": true, 00:17:39.211 "data_offset": 0, 00:17:39.211 "data_size": 65536 00:17:39.211 }, 00:17:39.211 { 00:17:39.211 "name": "BaseBdev4", 00:17:39.211 "uuid": "852c03b3-ad4d-4855-899f-06af68999178", 00:17:39.211 "is_configured": true, 00:17:39.211 "data_offset": 0, 00:17:39.211 "data_size": 65536 00:17:39.211 } 00:17:39.211 ] 00:17:39.211 }' 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.211 11:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:39.780 
11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.780 [2024-11-15 11:29:22.531896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.780 BaseBdev1 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.780 
11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.780 [ 00:17:39.780 { 00:17:39.780 "name": "BaseBdev1", 00:17:39.780 "aliases": [ 00:17:39.780 "8c4216fc-4974-431d-9d1f-c02466f58486" 00:17:39.780 ], 00:17:39.780 "product_name": "Malloc disk", 00:17:39.780 "block_size": 512, 00:17:39.780 "num_blocks": 65536, 00:17:39.780 "uuid": "8c4216fc-4974-431d-9d1f-c02466f58486", 00:17:39.780 "assigned_rate_limits": { 00:17:39.780 "rw_ios_per_sec": 0, 00:17:39.780 "rw_mbytes_per_sec": 0, 00:17:39.780 "r_mbytes_per_sec": 0, 00:17:39.780 "w_mbytes_per_sec": 0 00:17:39.780 }, 00:17:39.780 "claimed": true, 00:17:39.780 "claim_type": "exclusive_write", 00:17:39.780 "zoned": false, 00:17:39.780 "supported_io_types": { 00:17:39.780 "read": true, 00:17:39.780 "write": true, 00:17:39.780 "unmap": true, 00:17:39.780 "flush": true, 00:17:39.780 "reset": true, 00:17:39.780 "nvme_admin": false, 00:17:39.780 "nvme_io": false, 00:17:39.780 "nvme_io_md": false, 00:17:39.780 "write_zeroes": true, 00:17:39.780 "zcopy": true, 00:17:39.780 "get_zone_info": false, 00:17:39.780 "zone_management": false, 00:17:39.780 "zone_append": false, 00:17:39.780 "compare": false, 00:17:39.780 "compare_and_write": false, 00:17:39.780 "abort": true, 00:17:39.780 "seek_hole": false, 00:17:39.780 "seek_data": false, 00:17:39.780 "copy": true, 00:17:39.780 "nvme_iov_md": false 00:17:39.780 }, 00:17:39.780 "memory_domains": [ 00:17:39.780 { 00:17:39.780 "dma_device_id": "system", 00:17:39.780 "dma_device_type": 1 00:17:39.780 }, 00:17:39.780 { 00:17:39.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.780 "dma_device_type": 2 00:17:39.780 } 00:17:39.780 ], 00:17:39.780 "driver_specific": {} 00:17:39.780 } 00:17:39.780 ] 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:39.780 11:29:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.780 "name": "Existed_Raid", 00:17:39.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.780 "strip_size_kb": 64, 00:17:39.780 "state": 
"configuring", 00:17:39.780 "raid_level": "raid5f", 00:17:39.780 "superblock": false, 00:17:39.780 "num_base_bdevs": 4, 00:17:39.780 "num_base_bdevs_discovered": 3, 00:17:39.780 "num_base_bdevs_operational": 4, 00:17:39.780 "base_bdevs_list": [ 00:17:39.780 { 00:17:39.780 "name": "BaseBdev1", 00:17:39.780 "uuid": "8c4216fc-4974-431d-9d1f-c02466f58486", 00:17:39.780 "is_configured": true, 00:17:39.780 "data_offset": 0, 00:17:39.780 "data_size": 65536 00:17:39.780 }, 00:17:39.780 { 00:17:39.780 "name": null, 00:17:39.780 "uuid": "34e7bfd2-d976-48c3-8767-a0e9dc0bed1c", 00:17:39.780 "is_configured": false, 00:17:39.780 "data_offset": 0, 00:17:39.780 "data_size": 65536 00:17:39.780 }, 00:17:39.780 { 00:17:39.780 "name": "BaseBdev3", 00:17:39.780 "uuid": "1ce91b54-a0b9-48a0-859e-1f900fdc94ae", 00:17:39.780 "is_configured": true, 00:17:39.780 "data_offset": 0, 00:17:39.780 "data_size": 65536 00:17:39.780 }, 00:17:39.780 { 00:17:39.780 "name": "BaseBdev4", 00:17:39.780 "uuid": "852c03b3-ad4d-4855-899f-06af68999178", 00:17:39.780 "is_configured": true, 00:17:39.780 "data_offset": 0, 00:17:39.780 "data_size": 65536 00:17:39.780 } 00:17:39.780 ] 00:17:39.780 }' 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.780 11:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.349 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.349 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.349 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.349 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:40.349 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.349 11:29:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:40.349 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:40.349 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.349 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.349 [2024-11-15 11:29:23.116095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:40.349 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.349 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:40.349 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.350 11:29:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.350 "name": "Existed_Raid", 00:17:40.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.350 "strip_size_kb": 64, 00:17:40.350 "state": "configuring", 00:17:40.350 "raid_level": "raid5f", 00:17:40.350 "superblock": false, 00:17:40.350 "num_base_bdevs": 4, 00:17:40.350 "num_base_bdevs_discovered": 2, 00:17:40.350 "num_base_bdevs_operational": 4, 00:17:40.350 "base_bdevs_list": [ 00:17:40.350 { 00:17:40.350 "name": "BaseBdev1", 00:17:40.350 "uuid": "8c4216fc-4974-431d-9d1f-c02466f58486", 00:17:40.350 "is_configured": true, 00:17:40.350 "data_offset": 0, 00:17:40.350 "data_size": 65536 00:17:40.350 }, 00:17:40.350 { 00:17:40.350 "name": null, 00:17:40.350 "uuid": "34e7bfd2-d976-48c3-8767-a0e9dc0bed1c", 00:17:40.350 "is_configured": false, 00:17:40.350 "data_offset": 0, 00:17:40.350 "data_size": 65536 00:17:40.350 }, 00:17:40.350 { 00:17:40.350 "name": null, 00:17:40.350 "uuid": "1ce91b54-a0b9-48a0-859e-1f900fdc94ae", 00:17:40.350 "is_configured": false, 00:17:40.350 "data_offset": 0, 00:17:40.350 "data_size": 65536 00:17:40.350 }, 00:17:40.350 { 00:17:40.350 "name": "BaseBdev4", 00:17:40.350 "uuid": "852c03b3-ad4d-4855-899f-06af68999178", 00:17:40.350 "is_configured": true, 00:17:40.350 "data_offset": 0, 00:17:40.350 "data_size": 65536 00:17:40.350 } 00:17:40.350 ] 00:17:40.350 }' 00:17:40.350 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.350 11:29:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.919 [2024-11-15 11:29:23.668391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.919 
11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.919 "name": "Existed_Raid", 00:17:40.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.919 "strip_size_kb": 64, 00:17:40.919 "state": "configuring", 00:17:40.919 "raid_level": "raid5f", 00:17:40.919 "superblock": false, 00:17:40.919 "num_base_bdevs": 4, 00:17:40.919 "num_base_bdevs_discovered": 3, 00:17:40.919 "num_base_bdevs_operational": 4, 00:17:40.919 "base_bdevs_list": [ 00:17:40.919 { 00:17:40.919 "name": "BaseBdev1", 00:17:40.919 "uuid": "8c4216fc-4974-431d-9d1f-c02466f58486", 00:17:40.919 "is_configured": true, 00:17:40.919 "data_offset": 0, 00:17:40.919 "data_size": 65536 00:17:40.919 }, 00:17:40.919 { 00:17:40.919 "name": null, 00:17:40.919 "uuid": "34e7bfd2-d976-48c3-8767-a0e9dc0bed1c", 00:17:40.919 "is_configured": 
false, 00:17:40.919 "data_offset": 0, 00:17:40.919 "data_size": 65536 00:17:40.919 }, 00:17:40.919 { 00:17:40.919 "name": "BaseBdev3", 00:17:40.919 "uuid": "1ce91b54-a0b9-48a0-859e-1f900fdc94ae", 00:17:40.919 "is_configured": true, 00:17:40.919 "data_offset": 0, 00:17:40.919 "data_size": 65536 00:17:40.919 }, 00:17:40.919 { 00:17:40.919 "name": "BaseBdev4", 00:17:40.919 "uuid": "852c03b3-ad4d-4855-899f-06af68999178", 00:17:40.919 "is_configured": true, 00:17:40.919 "data_offset": 0, 00:17:40.919 "data_size": 65536 00:17:40.919 } 00:17:40.919 ] 00:17:40.919 }' 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.919 11:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.488 [2024-11-15 11:29:24.240655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.488 "name": "Existed_Raid", 00:17:41.488 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:41.488 "strip_size_kb": 64, 00:17:41.488 "state": "configuring", 00:17:41.488 "raid_level": "raid5f", 00:17:41.488 "superblock": false, 00:17:41.488 "num_base_bdevs": 4, 00:17:41.488 "num_base_bdevs_discovered": 2, 00:17:41.488 "num_base_bdevs_operational": 4, 00:17:41.488 "base_bdevs_list": [ 00:17:41.488 { 00:17:41.488 "name": null, 00:17:41.488 "uuid": "8c4216fc-4974-431d-9d1f-c02466f58486", 00:17:41.488 "is_configured": false, 00:17:41.488 "data_offset": 0, 00:17:41.488 "data_size": 65536 00:17:41.488 }, 00:17:41.488 { 00:17:41.488 "name": null, 00:17:41.488 "uuid": "34e7bfd2-d976-48c3-8767-a0e9dc0bed1c", 00:17:41.488 "is_configured": false, 00:17:41.488 "data_offset": 0, 00:17:41.488 "data_size": 65536 00:17:41.488 }, 00:17:41.488 { 00:17:41.488 "name": "BaseBdev3", 00:17:41.488 "uuid": "1ce91b54-a0b9-48a0-859e-1f900fdc94ae", 00:17:41.488 "is_configured": true, 00:17:41.488 "data_offset": 0, 00:17:41.488 "data_size": 65536 00:17:41.488 }, 00:17:41.488 { 00:17:41.488 "name": "BaseBdev4", 00:17:41.488 "uuid": "852c03b3-ad4d-4855-899f-06af68999178", 00:17:41.488 "is_configured": true, 00:17:41.488 "data_offset": 0, 00:17:41.488 "data_size": 65536 00:17:41.488 } 00:17:41.488 ] 00:17:41.488 }' 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.488 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.056 [2024-11-15 11:29:24.923258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.056 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.057 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.057 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.057 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.057 11:29:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.057 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.057 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.057 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.057 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.057 11:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.057 "name": "Existed_Raid", 00:17:42.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.057 "strip_size_kb": 64, 00:17:42.057 "state": "configuring", 00:17:42.057 "raid_level": "raid5f", 00:17:42.057 "superblock": false, 00:17:42.057 "num_base_bdevs": 4, 00:17:42.057 "num_base_bdevs_discovered": 3, 00:17:42.057 "num_base_bdevs_operational": 4, 00:17:42.057 "base_bdevs_list": [ 00:17:42.057 { 00:17:42.057 "name": null, 00:17:42.057 "uuid": "8c4216fc-4974-431d-9d1f-c02466f58486", 00:17:42.057 "is_configured": false, 00:17:42.057 "data_offset": 0, 00:17:42.057 "data_size": 65536 00:17:42.057 }, 00:17:42.057 { 00:17:42.057 "name": "BaseBdev2", 00:17:42.057 "uuid": "34e7bfd2-d976-48c3-8767-a0e9dc0bed1c", 00:17:42.057 "is_configured": true, 00:17:42.057 "data_offset": 0, 00:17:42.057 "data_size": 65536 00:17:42.057 }, 00:17:42.057 { 00:17:42.057 "name": "BaseBdev3", 00:17:42.057 "uuid": "1ce91b54-a0b9-48a0-859e-1f900fdc94ae", 00:17:42.057 "is_configured": true, 00:17:42.057 "data_offset": 0, 00:17:42.057 "data_size": 65536 00:17:42.057 }, 00:17:42.057 { 00:17:42.057 "name": "BaseBdev4", 00:17:42.057 "uuid": "852c03b3-ad4d-4855-899f-06af68999178", 00:17:42.057 "is_configured": true, 00:17:42.057 "data_offset": 0, 00:17:42.057 "data_size": 65536 00:17:42.057 } 00:17:42.057 ] 00:17:42.057 }' 00:17:42.057 11:29:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.057 11:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.624 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.624 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:42.624 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.624 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.624 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.624 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:42.624 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.624 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.624 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.624 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:42.625 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.625 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8c4216fc-4974-431d-9d1f-c02466f58486 00:17:42.625 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.625 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.884 [2024-11-15 11:29:25.589724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:42.884 [2024-11-15 
11:29:25.589789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:42.884 [2024-11-15 11:29:25.589801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:42.884 [2024-11-15 11:29:25.590223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:42.884 [2024-11-15 11:29:25.596697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:42.884 [2024-11-15 11:29:25.596726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:42.884 [2024-11-15 11:29:25.597079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.884 NewBaseBdev 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.884 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.884 [ 00:17:42.884 { 00:17:42.884 "name": "NewBaseBdev", 00:17:42.884 "aliases": [ 00:17:42.884 "8c4216fc-4974-431d-9d1f-c02466f58486" 00:17:42.884 ], 00:17:42.884 "product_name": "Malloc disk", 00:17:42.884 "block_size": 512, 00:17:42.885 "num_blocks": 65536, 00:17:42.885 "uuid": "8c4216fc-4974-431d-9d1f-c02466f58486", 00:17:42.885 "assigned_rate_limits": { 00:17:42.885 "rw_ios_per_sec": 0, 00:17:42.885 "rw_mbytes_per_sec": 0, 00:17:42.885 "r_mbytes_per_sec": 0, 00:17:42.885 "w_mbytes_per_sec": 0 00:17:42.885 }, 00:17:42.885 "claimed": true, 00:17:42.885 "claim_type": "exclusive_write", 00:17:42.885 "zoned": false, 00:17:42.885 "supported_io_types": { 00:17:42.885 "read": true, 00:17:42.885 "write": true, 00:17:42.885 "unmap": true, 00:17:42.885 "flush": true, 00:17:42.885 "reset": true, 00:17:42.885 "nvme_admin": false, 00:17:42.885 "nvme_io": false, 00:17:42.885 "nvme_io_md": false, 00:17:42.885 "write_zeroes": true, 00:17:42.885 "zcopy": true, 00:17:42.885 "get_zone_info": false, 00:17:42.885 "zone_management": false, 00:17:42.885 "zone_append": false, 00:17:42.885 "compare": false, 00:17:42.885 "compare_and_write": false, 00:17:42.885 "abort": true, 00:17:42.885 "seek_hole": false, 00:17:42.885 "seek_data": false, 00:17:42.885 "copy": true, 00:17:42.885 "nvme_iov_md": false 00:17:42.885 }, 00:17:42.885 "memory_domains": [ 00:17:42.885 { 00:17:42.885 "dma_device_id": "system", 00:17:42.885 "dma_device_type": 1 00:17:42.885 }, 00:17:42.885 { 00:17:42.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.885 "dma_device_type": 2 00:17:42.885 } 
00:17:42.885 ], 00:17:42.885 "driver_specific": {} 00:17:42.885 } 00:17:42.885 ] 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.885 "name": "Existed_Raid", 00:17:42.885 "uuid": "c67d07d5-2f09-4b1b-b7bc-4d158642c20a", 00:17:42.885 "strip_size_kb": 64, 00:17:42.885 "state": "online", 00:17:42.885 "raid_level": "raid5f", 00:17:42.885 "superblock": false, 00:17:42.885 "num_base_bdevs": 4, 00:17:42.885 "num_base_bdevs_discovered": 4, 00:17:42.885 "num_base_bdevs_operational": 4, 00:17:42.885 "base_bdevs_list": [ 00:17:42.885 { 00:17:42.885 "name": "NewBaseBdev", 00:17:42.885 "uuid": "8c4216fc-4974-431d-9d1f-c02466f58486", 00:17:42.885 "is_configured": true, 00:17:42.885 "data_offset": 0, 00:17:42.885 "data_size": 65536 00:17:42.885 }, 00:17:42.885 { 00:17:42.885 "name": "BaseBdev2", 00:17:42.885 "uuid": "34e7bfd2-d976-48c3-8767-a0e9dc0bed1c", 00:17:42.885 "is_configured": true, 00:17:42.885 "data_offset": 0, 00:17:42.885 "data_size": 65536 00:17:42.885 }, 00:17:42.885 { 00:17:42.885 "name": "BaseBdev3", 00:17:42.885 "uuid": "1ce91b54-a0b9-48a0-859e-1f900fdc94ae", 00:17:42.885 "is_configured": true, 00:17:42.885 "data_offset": 0, 00:17:42.885 "data_size": 65536 00:17:42.885 }, 00:17:42.885 { 00:17:42.885 "name": "BaseBdev4", 00:17:42.885 "uuid": "852c03b3-ad4d-4855-899f-06af68999178", 00:17:42.885 "is_configured": true, 00:17:42.885 "data_offset": 0, 00:17:42.885 "data_size": 65536 00:17:42.885 } 00:17:42.885 ] 00:17:42.885 }' 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.885 11:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.454 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:43.454 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:43.454 11:29:26 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:43.454 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:43.454 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:43.454 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:43.454 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:43.454 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:43.454 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.454 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.454 [2024-11-15 11:29:26.141474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.454 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.454 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:43.454 "name": "Existed_Raid", 00:17:43.454 "aliases": [ 00:17:43.454 "c67d07d5-2f09-4b1b-b7bc-4d158642c20a" 00:17:43.454 ], 00:17:43.454 "product_name": "Raid Volume", 00:17:43.454 "block_size": 512, 00:17:43.454 "num_blocks": 196608, 00:17:43.454 "uuid": "c67d07d5-2f09-4b1b-b7bc-4d158642c20a", 00:17:43.454 "assigned_rate_limits": { 00:17:43.454 "rw_ios_per_sec": 0, 00:17:43.454 "rw_mbytes_per_sec": 0, 00:17:43.454 "r_mbytes_per_sec": 0, 00:17:43.454 "w_mbytes_per_sec": 0 00:17:43.454 }, 00:17:43.454 "claimed": false, 00:17:43.454 "zoned": false, 00:17:43.454 "supported_io_types": { 00:17:43.454 "read": true, 00:17:43.454 "write": true, 00:17:43.454 "unmap": false, 00:17:43.454 "flush": false, 00:17:43.454 "reset": true, 00:17:43.454 "nvme_admin": false, 00:17:43.454 "nvme_io": false, 00:17:43.454 "nvme_io_md": 
false, 00:17:43.454 "write_zeroes": true, 00:17:43.454 "zcopy": false, 00:17:43.454 "get_zone_info": false, 00:17:43.454 "zone_management": false, 00:17:43.454 "zone_append": false, 00:17:43.454 "compare": false, 00:17:43.454 "compare_and_write": false, 00:17:43.454 "abort": false, 00:17:43.454 "seek_hole": false, 00:17:43.454 "seek_data": false, 00:17:43.454 "copy": false, 00:17:43.454 "nvme_iov_md": false 00:17:43.454 }, 00:17:43.454 "driver_specific": { 00:17:43.454 "raid": { 00:17:43.454 "uuid": "c67d07d5-2f09-4b1b-b7bc-4d158642c20a", 00:17:43.454 "strip_size_kb": 64, 00:17:43.454 "state": "online", 00:17:43.454 "raid_level": "raid5f", 00:17:43.455 "superblock": false, 00:17:43.455 "num_base_bdevs": 4, 00:17:43.455 "num_base_bdevs_discovered": 4, 00:17:43.455 "num_base_bdevs_operational": 4, 00:17:43.455 "base_bdevs_list": [ 00:17:43.455 { 00:17:43.455 "name": "NewBaseBdev", 00:17:43.455 "uuid": "8c4216fc-4974-431d-9d1f-c02466f58486", 00:17:43.455 "is_configured": true, 00:17:43.455 "data_offset": 0, 00:17:43.455 "data_size": 65536 00:17:43.455 }, 00:17:43.455 { 00:17:43.455 "name": "BaseBdev2", 00:17:43.455 "uuid": "34e7bfd2-d976-48c3-8767-a0e9dc0bed1c", 00:17:43.455 "is_configured": true, 00:17:43.455 "data_offset": 0, 00:17:43.455 "data_size": 65536 00:17:43.455 }, 00:17:43.455 { 00:17:43.455 "name": "BaseBdev3", 00:17:43.455 "uuid": "1ce91b54-a0b9-48a0-859e-1f900fdc94ae", 00:17:43.455 "is_configured": true, 00:17:43.455 "data_offset": 0, 00:17:43.455 "data_size": 65536 00:17:43.455 }, 00:17:43.455 { 00:17:43.455 "name": "BaseBdev4", 00:17:43.455 "uuid": "852c03b3-ad4d-4855-899f-06af68999178", 00:17:43.455 "is_configured": true, 00:17:43.455 "data_offset": 0, 00:17:43.455 "data_size": 65536 00:17:43.455 } 00:17:43.455 ] 00:17:43.455 } 00:17:43.455 } 00:17:43.455 }' 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:43.455 11:29:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:43.455 BaseBdev2 00:17:43.455 BaseBdev3 00:17:43.455 BaseBdev4' 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.455 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.714 11:29:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.714 [2024-11-15 11:29:26.501298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.714 [2024-11-15 11:29:26.501355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.714 [2024-11-15 11:29:26.501450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.714 [2024-11-15 11:29:26.501896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.714 [2024-11-15 11:29:26.501923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83036 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 83036 ']' 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 83036 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83036 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:43.714 killing process with pid 83036 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83036' 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 83036 00:17:43.714 [2024-11-15 11:29:26.542671] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.714 11:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 83036 00:17:43.974 [2024-11-15 11:29:26.852763] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:45.353 00:17:45.353 real 0m12.659s 00:17:45.353 user 0m20.803s 00:17:45.353 sys 0m2.009s 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.353 ************************************ 00:17:45.353 END TEST raid5f_state_function_test 00:17:45.353 ************************************ 00:17:45.353 11:29:27 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:45.353 11:29:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:45.353 11:29:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:45.353 11:29:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.353 ************************************ 00:17:45.353 START TEST 
raid5f_state_function_test_sb 00:17:45.353 ************************************ 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:45.353 
11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.353 11:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83712 00:17:45.353 Process raid pid: 83712 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83712' 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:45.353 11:29:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83712 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83712 ']' 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:45.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:45.353 11:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.353 [2024-11-15 11:29:28.123412] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:17:45.353 [2024-11-15 11:29:28.123608] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.613 [2024-11-15 11:29:28.311859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.613 [2024-11-15 11:29:28.455363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.872 [2024-11-15 11:29:28.665253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.872 [2024-11-15 11:29:28.665319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.441 [2024-11-15 11:29:29.109059] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:46.441 [2024-11-15 11:29:29.109139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:46.441 [2024-11-15 11:29:29.109154] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.441 [2024-11-15 11:29:29.109214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.441 [2024-11-15 11:29:29.109227] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:46.441 [2024-11-15 11:29:29.109257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:46.441 [2024-11-15 11:29:29.109267] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:46.441 [2024-11-15 11:29:29.109281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.441 "name": "Existed_Raid", 00:17:46.441 "uuid": "d2da2cc2-1a9c-409b-9226-a6e11c7bbb46", 00:17:46.441 "strip_size_kb": 64, 00:17:46.441 "state": "configuring", 00:17:46.441 "raid_level": "raid5f", 00:17:46.441 "superblock": true, 00:17:46.441 "num_base_bdevs": 4, 00:17:46.441 "num_base_bdevs_discovered": 0, 00:17:46.441 "num_base_bdevs_operational": 4, 00:17:46.441 "base_bdevs_list": [ 00:17:46.441 { 00:17:46.441 "name": "BaseBdev1", 00:17:46.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.441 "is_configured": false, 00:17:46.441 "data_offset": 0, 00:17:46.441 "data_size": 0 00:17:46.441 }, 00:17:46.441 { 00:17:46.441 "name": "BaseBdev2", 00:17:46.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.441 "is_configured": false, 00:17:46.441 "data_offset": 0, 00:17:46.441 "data_size": 0 00:17:46.441 }, 00:17:46.441 { 00:17:46.441 "name": "BaseBdev3", 00:17:46.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.441 "is_configured": false, 00:17:46.441 "data_offset": 0, 00:17:46.441 "data_size": 0 00:17:46.441 }, 00:17:46.441 { 00:17:46.441 "name": "BaseBdev4", 00:17:46.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.441 "is_configured": false, 00:17:46.441 "data_offset": 0, 00:17:46.441 "data_size": 0 00:17:46.441 } 00:17:46.441 ] 00:17:46.441 }' 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.441 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
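The `verify_raid_bdev_state` helper invoked above selects the raid bdev out of the `bdev_raid_get_bdevs all` output with `jq` and compares fields against the expected values. A minimal Python sketch of that check, using the exact JSON shape shown in the log (the helper's real logic lives in `bdev/bdev_raid.sh`; this is an illustrative reconstruction, not the shipped code):

```python
import json

# Sample trimmed from the bdev_raid_get_bdevs output shown in the log above
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": false},
    {"name": "BaseBdev3", "is_configured": false},
    {"name": "BaseBdev4", "is_configured": false}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Mirror the field checks the shell helper performs on the RPC output."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must agree with the bdevs flagged as configured
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return True

print(verify_raid_bdev_state(raid_bdev_info, "configuring", "raid5f", 64, 4))
```

With zero base bdevs present yet, the raid stays in `configuring` and every entry in `base_bdevs_list` carries the all-zero placeholder UUID, which is exactly what the dump above shows.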
00:17:46.700 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:46.700 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.700 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.700 [2024-11-15 11:29:29.645339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.700 [2024-11-15 11:29:29.645392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.959 [2024-11-15 11:29:29.653351] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:46.959 [2024-11-15 11:29:29.653407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:46.959 [2024-11-15 11:29:29.653422] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.959 [2024-11-15 11:29:29.653444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.959 [2024-11-15 11:29:29.653454] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:46.959 [2024-11-15 11:29:29.653468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:46.959 [2024-11-15 11:29:29.653478] 
bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:46.959 [2024-11-15 11:29:29.653492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.959 [2024-11-15 11:29:29.716945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.959 BaseBdev1 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.959 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.959 [ 00:17:46.959 { 00:17:46.959 "name": "BaseBdev1", 00:17:46.959 "aliases": [ 00:17:46.959 "27a1ef01-d814-49f3-b0f8-bfab93be056e" 00:17:46.959 ], 00:17:46.959 "product_name": "Malloc disk", 00:17:46.959 "block_size": 512, 00:17:46.959 "num_blocks": 65536, 00:17:46.959 "uuid": "27a1ef01-d814-49f3-b0f8-bfab93be056e", 00:17:46.959 "assigned_rate_limits": { 00:17:46.959 "rw_ios_per_sec": 0, 00:17:46.959 "rw_mbytes_per_sec": 0, 00:17:46.959 "r_mbytes_per_sec": 0, 00:17:46.959 "w_mbytes_per_sec": 0 00:17:46.959 }, 00:17:46.959 "claimed": true, 00:17:46.959 "claim_type": "exclusive_write", 00:17:46.959 "zoned": false, 00:17:46.959 "supported_io_types": { 00:17:46.959 "read": true, 00:17:46.959 "write": true, 00:17:46.960 "unmap": true, 00:17:46.960 "flush": true, 00:17:46.960 "reset": true, 00:17:46.960 "nvme_admin": false, 00:17:46.960 "nvme_io": false, 00:17:46.960 "nvme_io_md": false, 00:17:46.960 "write_zeroes": true, 00:17:46.960 "zcopy": true, 00:17:46.960 "get_zone_info": false, 00:17:46.960 "zone_management": false, 00:17:46.960 "zone_append": false, 00:17:46.960 "compare": false, 00:17:46.960 "compare_and_write": false, 00:17:46.960 "abort": true, 00:17:46.960 "seek_hole": false, 00:17:46.960 "seek_data": false, 00:17:46.960 "copy": true, 00:17:46.960 "nvme_iov_md": false 00:17:46.960 }, 00:17:46.960 "memory_domains": [ 00:17:46.960 { 00:17:46.960 "dma_device_id": "system", 00:17:46.960 "dma_device_type": 1 00:17:46.960 }, 00:17:46.960 { 00:17:46.960 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:46.960 "dma_device_type": 2 00:17:46.960 } 00:17:46.960 ], 00:17:46.960 "driver_specific": {} 00:17:46.960 } 00:17:46.960 ] 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.960 11:29:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.960 "name": "Existed_Raid", 00:17:46.960 "uuid": "9b959f21-b174-41b7-9e80-2785bf1d2957", 00:17:46.960 "strip_size_kb": 64, 00:17:46.960 "state": "configuring", 00:17:46.960 "raid_level": "raid5f", 00:17:46.960 "superblock": true, 00:17:46.960 "num_base_bdevs": 4, 00:17:46.960 "num_base_bdevs_discovered": 1, 00:17:46.960 "num_base_bdevs_operational": 4, 00:17:46.960 "base_bdevs_list": [ 00:17:46.960 { 00:17:46.960 "name": "BaseBdev1", 00:17:46.960 "uuid": "27a1ef01-d814-49f3-b0f8-bfab93be056e", 00:17:46.960 "is_configured": true, 00:17:46.960 "data_offset": 2048, 00:17:46.960 "data_size": 63488 00:17:46.960 }, 00:17:46.960 { 00:17:46.960 "name": "BaseBdev2", 00:17:46.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.960 "is_configured": false, 00:17:46.960 "data_offset": 0, 00:17:46.960 "data_size": 0 00:17:46.960 }, 00:17:46.960 { 00:17:46.960 "name": "BaseBdev3", 00:17:46.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.960 "is_configured": false, 00:17:46.960 "data_offset": 0, 00:17:46.960 "data_size": 0 00:17:46.960 }, 00:17:46.960 { 00:17:46.960 "name": "BaseBdev4", 00:17:46.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.960 "is_configured": false, 00:17:46.960 "data_offset": 0, 00:17:46.960 "data_size": 0 00:17:46.960 } 00:17:46.960 ] 00:17:46.960 }' 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.960 11:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:47.528 11:29:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.528 [2024-11-15 11:29:30.277294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:47.528 [2024-11-15 11:29:30.277383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.528 [2024-11-15 11:29:30.289424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.528 [2024-11-15 11:29:30.292260] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.528 [2024-11-15 11:29:30.292329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.528 [2024-11-15 11:29:30.292345] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:47.528 [2024-11-15 11:29:30.292362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:47.528 [2024-11-15 11:29:30.292372] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:47.528 [2024-11-15 11:29:30.292385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.528 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.529 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.529 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.529 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.529 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.529 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.529 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.529 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.529 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.529 11:29:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.529 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.529 "name": "Existed_Raid", 00:17:47.529 "uuid": "25f1f0b8-8463-40e8-b3d3-c9c5b2b91d7a", 00:17:47.529 "strip_size_kb": 64, 00:17:47.529 "state": "configuring", 00:17:47.529 "raid_level": "raid5f", 00:17:47.529 "superblock": true, 00:17:47.529 "num_base_bdevs": 4, 00:17:47.529 "num_base_bdevs_discovered": 1, 00:17:47.529 "num_base_bdevs_operational": 4, 00:17:47.529 "base_bdevs_list": [ 00:17:47.529 { 00:17:47.529 "name": "BaseBdev1", 00:17:47.529 "uuid": "27a1ef01-d814-49f3-b0f8-bfab93be056e", 00:17:47.529 "is_configured": true, 00:17:47.529 "data_offset": 2048, 00:17:47.529 "data_size": 63488 00:17:47.529 }, 00:17:47.529 { 00:17:47.529 "name": "BaseBdev2", 00:17:47.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.529 "is_configured": false, 00:17:47.529 "data_offset": 0, 00:17:47.529 "data_size": 0 00:17:47.529 }, 00:17:47.529 { 00:17:47.529 "name": "BaseBdev3", 00:17:47.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.529 "is_configured": false, 00:17:47.529 "data_offset": 0, 00:17:47.529 "data_size": 0 00:17:47.529 }, 00:17:47.529 { 00:17:47.529 "name": "BaseBdev4", 00:17:47.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.529 "is_configured": false, 00:17:47.529 "data_offset": 0, 00:17:47.529 "data_size": 0 00:17:47.529 } 00:17:47.529 ] 00:17:47.529 }' 00:17:47.529 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.529 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.098 [2024-11-15 11:29:30.860226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.098 BaseBdev2 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.098 [ 00:17:48.098 { 00:17:48.098 "name": "BaseBdev2", 00:17:48.098 "aliases": [ 00:17:48.098 
"31ffac7d-913a-4d9c-8fdd-00b90434cfc0" 00:17:48.098 ], 00:17:48.098 "product_name": "Malloc disk", 00:17:48.098 "block_size": 512, 00:17:48.098 "num_blocks": 65536, 00:17:48.098 "uuid": "31ffac7d-913a-4d9c-8fdd-00b90434cfc0", 00:17:48.098 "assigned_rate_limits": { 00:17:48.098 "rw_ios_per_sec": 0, 00:17:48.098 "rw_mbytes_per_sec": 0, 00:17:48.098 "r_mbytes_per_sec": 0, 00:17:48.098 "w_mbytes_per_sec": 0 00:17:48.098 }, 00:17:48.098 "claimed": true, 00:17:48.098 "claim_type": "exclusive_write", 00:17:48.098 "zoned": false, 00:17:48.098 "supported_io_types": { 00:17:48.098 "read": true, 00:17:48.098 "write": true, 00:17:48.098 "unmap": true, 00:17:48.098 "flush": true, 00:17:48.098 "reset": true, 00:17:48.098 "nvme_admin": false, 00:17:48.098 "nvme_io": false, 00:17:48.098 "nvme_io_md": false, 00:17:48.098 "write_zeroes": true, 00:17:48.098 "zcopy": true, 00:17:48.098 "get_zone_info": false, 00:17:48.098 "zone_management": false, 00:17:48.098 "zone_append": false, 00:17:48.098 "compare": false, 00:17:48.098 "compare_and_write": false, 00:17:48.098 "abort": true, 00:17:48.098 "seek_hole": false, 00:17:48.098 "seek_data": false, 00:17:48.098 "copy": true, 00:17:48.098 "nvme_iov_md": false 00:17:48.098 }, 00:17:48.098 "memory_domains": [ 00:17:48.098 { 00:17:48.098 "dma_device_id": "system", 00:17:48.098 "dma_device_type": 1 00:17:48.098 }, 00:17:48.098 { 00:17:48.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.098 "dma_device_type": 2 00:17:48.098 } 00:17:48.098 ], 00:17:48.098 "driver_specific": {} 00:17:48.098 } 00:17:48.098 ] 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.098 "name": "Existed_Raid", 00:17:48.098 "uuid": 
"25f1f0b8-8463-40e8-b3d3-c9c5b2b91d7a", 00:17:48.098 "strip_size_kb": 64, 00:17:48.098 "state": "configuring", 00:17:48.098 "raid_level": "raid5f", 00:17:48.098 "superblock": true, 00:17:48.098 "num_base_bdevs": 4, 00:17:48.098 "num_base_bdevs_discovered": 2, 00:17:48.098 "num_base_bdevs_operational": 4, 00:17:48.098 "base_bdevs_list": [ 00:17:48.098 { 00:17:48.098 "name": "BaseBdev1", 00:17:48.098 "uuid": "27a1ef01-d814-49f3-b0f8-bfab93be056e", 00:17:48.098 "is_configured": true, 00:17:48.098 "data_offset": 2048, 00:17:48.098 "data_size": 63488 00:17:48.098 }, 00:17:48.098 { 00:17:48.098 "name": "BaseBdev2", 00:17:48.098 "uuid": "31ffac7d-913a-4d9c-8fdd-00b90434cfc0", 00:17:48.098 "is_configured": true, 00:17:48.098 "data_offset": 2048, 00:17:48.098 "data_size": 63488 00:17:48.098 }, 00:17:48.098 { 00:17:48.098 "name": "BaseBdev3", 00:17:48.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.098 "is_configured": false, 00:17:48.098 "data_offset": 0, 00:17:48.098 "data_size": 0 00:17:48.098 }, 00:17:48.098 { 00:17:48.098 "name": "BaseBdev4", 00:17:48.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.098 "is_configured": false, 00:17:48.098 "data_offset": 0, 00:17:48.098 "data_size": 0 00:17:48.098 } 00:17:48.098 ] 00:17:48.098 }' 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.098 11:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.668 [2024-11-15 11:29:31.445826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:48.668 BaseBdev3 
00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.668 [ 00:17:48.668 { 00:17:48.668 "name": "BaseBdev3", 00:17:48.668 "aliases": [ 00:17:48.668 "da6e8c96-7082-42f4-9c42-53ec0ca5efa7" 00:17:48.668 ], 00:17:48.668 "product_name": "Malloc disk", 00:17:48.668 "block_size": 512, 00:17:48.668 "num_blocks": 65536, 00:17:48.668 "uuid": "da6e8c96-7082-42f4-9c42-53ec0ca5efa7", 00:17:48.668 
"assigned_rate_limits": { 00:17:48.668 "rw_ios_per_sec": 0, 00:17:48.668 "rw_mbytes_per_sec": 0, 00:17:48.668 "r_mbytes_per_sec": 0, 00:17:48.668 "w_mbytes_per_sec": 0 00:17:48.668 }, 00:17:48.668 "claimed": true, 00:17:48.668 "claim_type": "exclusive_write", 00:17:48.668 "zoned": false, 00:17:48.668 "supported_io_types": { 00:17:48.668 "read": true, 00:17:48.668 "write": true, 00:17:48.668 "unmap": true, 00:17:48.668 "flush": true, 00:17:48.668 "reset": true, 00:17:48.668 "nvme_admin": false, 00:17:48.668 "nvme_io": false, 00:17:48.668 "nvme_io_md": false, 00:17:48.668 "write_zeroes": true, 00:17:48.668 "zcopy": true, 00:17:48.668 "get_zone_info": false, 00:17:48.668 "zone_management": false, 00:17:48.668 "zone_append": false, 00:17:48.668 "compare": false, 00:17:48.668 "compare_and_write": false, 00:17:48.668 "abort": true, 00:17:48.668 "seek_hole": false, 00:17:48.668 "seek_data": false, 00:17:48.668 "copy": true, 00:17:48.668 "nvme_iov_md": false 00:17:48.668 }, 00:17:48.668 "memory_domains": [ 00:17:48.668 { 00:17:48.668 "dma_device_id": "system", 00:17:48.668 "dma_device_type": 1 00:17:48.668 }, 00:17:48.668 { 00:17:48.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.668 "dma_device_type": 2 00:17:48.668 } 00:17:48.668 ], 00:17:48.668 "driver_specific": {} 00:17:48.668 } 00:17:48.668 ] 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.668 "name": "Existed_Raid", 00:17:48.668 "uuid": "25f1f0b8-8463-40e8-b3d3-c9c5b2b91d7a", 00:17:48.668 "strip_size_kb": 64, 00:17:48.668 "state": "configuring", 00:17:48.668 "raid_level": "raid5f", 00:17:48.668 "superblock": true, 00:17:48.668 "num_base_bdevs": 4, 00:17:48.668 "num_base_bdevs_discovered": 3, 
00:17:48.668 "num_base_bdevs_operational": 4, 00:17:48.668 "base_bdevs_list": [ 00:17:48.668 { 00:17:48.668 "name": "BaseBdev1", 00:17:48.668 "uuid": "27a1ef01-d814-49f3-b0f8-bfab93be056e", 00:17:48.668 "is_configured": true, 00:17:48.668 "data_offset": 2048, 00:17:48.668 "data_size": 63488 00:17:48.668 }, 00:17:48.668 { 00:17:48.668 "name": "BaseBdev2", 00:17:48.668 "uuid": "31ffac7d-913a-4d9c-8fdd-00b90434cfc0", 00:17:48.668 "is_configured": true, 00:17:48.668 "data_offset": 2048, 00:17:48.668 "data_size": 63488 00:17:48.668 }, 00:17:48.668 { 00:17:48.668 "name": "BaseBdev3", 00:17:48.668 "uuid": "da6e8c96-7082-42f4-9c42-53ec0ca5efa7", 00:17:48.668 "is_configured": true, 00:17:48.668 "data_offset": 2048, 00:17:48.668 "data_size": 63488 00:17:48.668 }, 00:17:48.668 { 00:17:48.668 "name": "BaseBdev4", 00:17:48.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.668 "is_configured": false, 00:17:48.668 "data_offset": 0, 00:17:48.668 "data_size": 0 00:17:48.668 } 00:17:48.668 ] 00:17:48.668 }' 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.668 11:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.237 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:49.237 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.237 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.237 [2024-11-15 11:29:32.056226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:49.237 [2024-11-15 11:29:32.056634] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:49.237 [2024-11-15 11:29:32.056685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:49.237 [2024-11-15 
11:29:32.057044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:49.237 BaseBdev4 00:17:49.237 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.237 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:49.237 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:49.237 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:49.237 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.238 [2024-11-15 11:29:32.063892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:49.238 [2024-11-15 11:29:32.063942] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:49.238 [2024-11-15 11:29:32.064314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:49.238 11:29:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.238 [ 00:17:49.238 { 00:17:49.238 "name": "BaseBdev4", 00:17:49.238 "aliases": [ 00:17:49.238 "09aa17d8-6499-48c9-9161-8e6839eb3b62" 00:17:49.238 ], 00:17:49.238 "product_name": "Malloc disk", 00:17:49.238 "block_size": 512, 00:17:49.238 "num_blocks": 65536, 00:17:49.238 "uuid": "09aa17d8-6499-48c9-9161-8e6839eb3b62", 00:17:49.238 "assigned_rate_limits": { 00:17:49.238 "rw_ios_per_sec": 0, 00:17:49.238 "rw_mbytes_per_sec": 0, 00:17:49.238 "r_mbytes_per_sec": 0, 00:17:49.238 "w_mbytes_per_sec": 0 00:17:49.238 }, 00:17:49.238 "claimed": true, 00:17:49.238 "claim_type": "exclusive_write", 00:17:49.238 "zoned": false, 00:17:49.238 "supported_io_types": { 00:17:49.238 "read": true, 00:17:49.238 "write": true, 00:17:49.238 "unmap": true, 00:17:49.238 "flush": true, 00:17:49.238 "reset": true, 00:17:49.238 "nvme_admin": false, 00:17:49.238 "nvme_io": false, 00:17:49.238 "nvme_io_md": false, 00:17:49.238 "write_zeroes": true, 00:17:49.238 "zcopy": true, 00:17:49.238 "get_zone_info": false, 00:17:49.238 "zone_management": false, 00:17:49.238 "zone_append": false, 00:17:49.238 "compare": false, 00:17:49.238 "compare_and_write": false, 00:17:49.238 "abort": true, 00:17:49.238 "seek_hole": false, 00:17:49.238 "seek_data": false, 00:17:49.238 "copy": true, 00:17:49.238 "nvme_iov_md": false 00:17:49.238 }, 00:17:49.238 "memory_domains": [ 00:17:49.238 { 00:17:49.238 "dma_device_id": "system", 00:17:49.238 "dma_device_type": 1 00:17:49.238 }, 00:17:49.238 { 00:17:49.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.238 "dma_device_type": 2 00:17:49.238 } 00:17:49.238 ], 00:17:49.238 "driver_specific": {} 00:17:49.238 } 00:17:49.238 ] 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.238 11:29:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.238 "name": "Existed_Raid", 00:17:49.238 "uuid": "25f1f0b8-8463-40e8-b3d3-c9c5b2b91d7a", 00:17:49.238 "strip_size_kb": 64, 00:17:49.238 "state": "online", 00:17:49.238 "raid_level": "raid5f", 00:17:49.238 "superblock": true, 00:17:49.238 "num_base_bdevs": 4, 00:17:49.238 "num_base_bdevs_discovered": 4, 00:17:49.238 "num_base_bdevs_operational": 4, 00:17:49.238 "base_bdevs_list": [ 00:17:49.238 { 00:17:49.238 "name": "BaseBdev1", 00:17:49.238 "uuid": "27a1ef01-d814-49f3-b0f8-bfab93be056e", 00:17:49.238 "is_configured": true, 00:17:49.238 "data_offset": 2048, 00:17:49.238 "data_size": 63488 00:17:49.238 }, 00:17:49.238 { 00:17:49.238 "name": "BaseBdev2", 00:17:49.238 "uuid": "31ffac7d-913a-4d9c-8fdd-00b90434cfc0", 00:17:49.238 "is_configured": true, 00:17:49.238 "data_offset": 2048, 00:17:49.238 "data_size": 63488 00:17:49.238 }, 00:17:49.238 { 00:17:49.238 "name": "BaseBdev3", 00:17:49.238 "uuid": "da6e8c96-7082-42f4-9c42-53ec0ca5efa7", 00:17:49.238 "is_configured": true, 00:17:49.238 "data_offset": 2048, 00:17:49.238 "data_size": 63488 00:17:49.238 }, 00:17:49.238 { 00:17:49.238 "name": "BaseBdev4", 00:17:49.238 "uuid": "09aa17d8-6499-48c9-9161-8e6839eb3b62", 00:17:49.238 "is_configured": true, 00:17:49.238 "data_offset": 2048, 00:17:49.238 "data_size": 63488 00:17:49.238 } 00:17:49.238 ] 00:17:49.238 }' 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.238 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.807 [2024-11-15 11:29:32.628640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:49.807 "name": "Existed_Raid", 00:17:49.807 "aliases": [ 00:17:49.807 "25f1f0b8-8463-40e8-b3d3-c9c5b2b91d7a" 00:17:49.807 ], 00:17:49.807 "product_name": "Raid Volume", 00:17:49.807 "block_size": 512, 00:17:49.807 "num_blocks": 190464, 00:17:49.807 "uuid": "25f1f0b8-8463-40e8-b3d3-c9c5b2b91d7a", 00:17:49.807 "assigned_rate_limits": { 00:17:49.807 "rw_ios_per_sec": 0, 00:17:49.807 "rw_mbytes_per_sec": 0, 00:17:49.807 "r_mbytes_per_sec": 0, 00:17:49.807 "w_mbytes_per_sec": 0 00:17:49.807 }, 00:17:49.807 "claimed": false, 00:17:49.807 "zoned": false, 00:17:49.807 "supported_io_types": { 00:17:49.807 "read": true, 00:17:49.807 "write": true, 00:17:49.807 "unmap": false, 00:17:49.807 "flush": false, 
00:17:49.807 "reset": true, 00:17:49.807 "nvme_admin": false, 00:17:49.807 "nvme_io": false, 00:17:49.807 "nvme_io_md": false, 00:17:49.807 "write_zeroes": true, 00:17:49.807 "zcopy": false, 00:17:49.807 "get_zone_info": false, 00:17:49.807 "zone_management": false, 00:17:49.807 "zone_append": false, 00:17:49.807 "compare": false, 00:17:49.807 "compare_and_write": false, 00:17:49.807 "abort": false, 00:17:49.807 "seek_hole": false, 00:17:49.807 "seek_data": false, 00:17:49.807 "copy": false, 00:17:49.807 "nvme_iov_md": false 00:17:49.807 }, 00:17:49.807 "driver_specific": { 00:17:49.807 "raid": { 00:17:49.807 "uuid": "25f1f0b8-8463-40e8-b3d3-c9c5b2b91d7a", 00:17:49.807 "strip_size_kb": 64, 00:17:49.807 "state": "online", 00:17:49.807 "raid_level": "raid5f", 00:17:49.807 "superblock": true, 00:17:49.807 "num_base_bdevs": 4, 00:17:49.807 "num_base_bdevs_discovered": 4, 00:17:49.807 "num_base_bdevs_operational": 4, 00:17:49.807 "base_bdevs_list": [ 00:17:49.807 { 00:17:49.807 "name": "BaseBdev1", 00:17:49.807 "uuid": "27a1ef01-d814-49f3-b0f8-bfab93be056e", 00:17:49.807 "is_configured": true, 00:17:49.807 "data_offset": 2048, 00:17:49.807 "data_size": 63488 00:17:49.807 }, 00:17:49.807 { 00:17:49.807 "name": "BaseBdev2", 00:17:49.807 "uuid": "31ffac7d-913a-4d9c-8fdd-00b90434cfc0", 00:17:49.807 "is_configured": true, 00:17:49.807 "data_offset": 2048, 00:17:49.807 "data_size": 63488 00:17:49.807 }, 00:17:49.807 { 00:17:49.807 "name": "BaseBdev3", 00:17:49.807 "uuid": "da6e8c96-7082-42f4-9c42-53ec0ca5efa7", 00:17:49.807 "is_configured": true, 00:17:49.807 "data_offset": 2048, 00:17:49.807 "data_size": 63488 00:17:49.807 }, 00:17:49.807 { 00:17:49.807 "name": "BaseBdev4", 00:17:49.807 "uuid": "09aa17d8-6499-48c9-9161-8e6839eb3b62", 00:17:49.807 "is_configured": true, 00:17:49.807 "data_offset": 2048, 00:17:49.807 "data_size": 63488 00:17:49.807 } 00:17:49.807 ] 00:17:49.807 } 00:17:49.807 } 00:17:49.807 }' 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:49.807 BaseBdev2 00:17:49.807 BaseBdev3 00:17:49.807 BaseBdev4' 00:17:49.807 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.066 11:29:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.066 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.067 11:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.067 [2024-11-15 11:29:33.004684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.326 "name": "Existed_Raid", 00:17:50.326 "uuid": "25f1f0b8-8463-40e8-b3d3-c9c5b2b91d7a", 00:17:50.326 "strip_size_kb": 64, 00:17:50.326 "state": "online", 00:17:50.326 "raid_level": "raid5f", 00:17:50.326 "superblock": true, 00:17:50.326 "num_base_bdevs": 4, 00:17:50.326 "num_base_bdevs_discovered": 3, 00:17:50.326 "num_base_bdevs_operational": 3, 00:17:50.326 "base_bdevs_list": [ 00:17:50.326 { 00:17:50.326 "name": null, 00:17:50.326 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:50.326 "is_configured": false, 00:17:50.326 "data_offset": 0, 00:17:50.326 "data_size": 63488 00:17:50.326 }, 00:17:50.326 { 00:17:50.326 "name": "BaseBdev2", 00:17:50.326 "uuid": "31ffac7d-913a-4d9c-8fdd-00b90434cfc0", 00:17:50.326 "is_configured": true, 00:17:50.326 "data_offset": 2048, 00:17:50.326 "data_size": 63488 00:17:50.326 }, 00:17:50.326 { 00:17:50.326 "name": "BaseBdev3", 00:17:50.326 "uuid": "da6e8c96-7082-42f4-9c42-53ec0ca5efa7", 00:17:50.326 "is_configured": true, 00:17:50.326 "data_offset": 2048, 00:17:50.326 "data_size": 63488 00:17:50.326 }, 00:17:50.326 { 00:17:50.326 "name": "BaseBdev4", 00:17:50.326 "uuid": "09aa17d8-6499-48c9-9161-8e6839eb3b62", 00:17:50.326 "is_configured": true, 00:17:50.326 "data_offset": 2048, 00:17:50.326 "data_size": 63488 00:17:50.326 } 00:17:50.326 ] 00:17:50.326 }' 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.326 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.895 [2024-11-15 11:29:33.666077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:50.895 [2024-11-15 11:29:33.666415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.895 [2024-11-15 11:29:33.748404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.895 
11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.895 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.895 [2024-11-15 11:29:33.808477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.154 11:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.154 [2024-11-15 11:29:33.954894] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:51.154 [2024-11-15 11:29:33.954975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.154 11:29:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.413 BaseBdev2 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.413 [ 00:17:51.413 { 00:17:51.413 "name": "BaseBdev2", 00:17:51.413 "aliases": [ 00:17:51.413 "03c90c71-fcff-4669-b43a-ce29359a3cd2" 00:17:51.413 ], 00:17:51.413 "product_name": "Malloc disk", 00:17:51.413 "block_size": 512, 00:17:51.413 "num_blocks": 65536, 00:17:51.413 "uuid": 
"03c90c71-fcff-4669-b43a-ce29359a3cd2", 00:17:51.413 "assigned_rate_limits": { 00:17:51.413 "rw_ios_per_sec": 0, 00:17:51.413 "rw_mbytes_per_sec": 0, 00:17:51.413 "r_mbytes_per_sec": 0, 00:17:51.413 "w_mbytes_per_sec": 0 00:17:51.413 }, 00:17:51.413 "claimed": false, 00:17:51.413 "zoned": false, 00:17:51.413 "supported_io_types": { 00:17:51.413 "read": true, 00:17:51.413 "write": true, 00:17:51.413 "unmap": true, 00:17:51.413 "flush": true, 00:17:51.413 "reset": true, 00:17:51.413 "nvme_admin": false, 00:17:51.413 "nvme_io": false, 00:17:51.413 "nvme_io_md": false, 00:17:51.413 "write_zeroes": true, 00:17:51.413 "zcopy": true, 00:17:51.413 "get_zone_info": false, 00:17:51.413 "zone_management": false, 00:17:51.413 "zone_append": false, 00:17:51.413 "compare": false, 00:17:51.413 "compare_and_write": false, 00:17:51.413 "abort": true, 00:17:51.413 "seek_hole": false, 00:17:51.413 "seek_data": false, 00:17:51.413 "copy": true, 00:17:51.413 "nvme_iov_md": false 00:17:51.413 }, 00:17:51.413 "memory_domains": [ 00:17:51.413 { 00:17:51.413 "dma_device_id": "system", 00:17:51.413 "dma_device_type": 1 00:17:51.413 }, 00:17:51.413 { 00:17:51.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.413 "dma_device_type": 2 00:17:51.413 } 00:17:51.413 ], 00:17:51.413 "driver_specific": {} 00:17:51.413 } 00:17:51.413 ] 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.413 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.414 BaseBdev3 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.414 [ 00:17:51.414 { 00:17:51.414 "name": "BaseBdev3", 00:17:51.414 "aliases": [ 00:17:51.414 "7b8e3394-f53b-4b07-9a22-539eb42d4113" 00:17:51.414 ], 00:17:51.414 
"product_name": "Malloc disk", 00:17:51.414 "block_size": 512, 00:17:51.414 "num_blocks": 65536, 00:17:51.414 "uuid": "7b8e3394-f53b-4b07-9a22-539eb42d4113", 00:17:51.414 "assigned_rate_limits": { 00:17:51.414 "rw_ios_per_sec": 0, 00:17:51.414 "rw_mbytes_per_sec": 0, 00:17:51.414 "r_mbytes_per_sec": 0, 00:17:51.414 "w_mbytes_per_sec": 0 00:17:51.414 }, 00:17:51.414 "claimed": false, 00:17:51.414 "zoned": false, 00:17:51.414 "supported_io_types": { 00:17:51.414 "read": true, 00:17:51.414 "write": true, 00:17:51.414 "unmap": true, 00:17:51.414 "flush": true, 00:17:51.414 "reset": true, 00:17:51.414 "nvme_admin": false, 00:17:51.414 "nvme_io": false, 00:17:51.414 "nvme_io_md": false, 00:17:51.414 "write_zeroes": true, 00:17:51.414 "zcopy": true, 00:17:51.414 "get_zone_info": false, 00:17:51.414 "zone_management": false, 00:17:51.414 "zone_append": false, 00:17:51.414 "compare": false, 00:17:51.414 "compare_and_write": false, 00:17:51.414 "abort": true, 00:17:51.414 "seek_hole": false, 00:17:51.414 "seek_data": false, 00:17:51.414 "copy": true, 00:17:51.414 "nvme_iov_md": false 00:17:51.414 }, 00:17:51.414 "memory_domains": [ 00:17:51.414 { 00:17:51.414 "dma_device_id": "system", 00:17:51.414 "dma_device_type": 1 00:17:51.414 }, 00:17:51.414 { 00:17:51.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.414 "dma_device_type": 2 00:17:51.414 } 00:17:51.414 ], 00:17:51.414 "driver_specific": {} 00:17:51.414 } 00:17:51.414 ] 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.414 BaseBdev4 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.414 [ 00:17:51.414 { 00:17:51.414 "name": "BaseBdev4", 00:17:51.414 
"aliases": [ 00:17:51.414 "b546150a-9c8d-49f9-b460-73238d3d4ed1" 00:17:51.414 ], 00:17:51.414 "product_name": "Malloc disk", 00:17:51.414 "block_size": 512, 00:17:51.414 "num_blocks": 65536, 00:17:51.414 "uuid": "b546150a-9c8d-49f9-b460-73238d3d4ed1", 00:17:51.414 "assigned_rate_limits": { 00:17:51.414 "rw_ios_per_sec": 0, 00:17:51.414 "rw_mbytes_per_sec": 0, 00:17:51.414 "r_mbytes_per_sec": 0, 00:17:51.414 "w_mbytes_per_sec": 0 00:17:51.414 }, 00:17:51.414 "claimed": false, 00:17:51.414 "zoned": false, 00:17:51.414 "supported_io_types": { 00:17:51.414 "read": true, 00:17:51.414 "write": true, 00:17:51.414 "unmap": true, 00:17:51.414 "flush": true, 00:17:51.414 "reset": true, 00:17:51.414 "nvme_admin": false, 00:17:51.414 "nvme_io": false, 00:17:51.414 "nvme_io_md": false, 00:17:51.414 "write_zeroes": true, 00:17:51.414 "zcopy": true, 00:17:51.414 "get_zone_info": false, 00:17:51.414 "zone_management": false, 00:17:51.414 "zone_append": false, 00:17:51.414 "compare": false, 00:17:51.414 "compare_and_write": false, 00:17:51.414 "abort": true, 00:17:51.414 "seek_hole": false, 00:17:51.414 "seek_data": false, 00:17:51.414 "copy": true, 00:17:51.414 "nvme_iov_md": false 00:17:51.414 }, 00:17:51.414 "memory_domains": [ 00:17:51.414 { 00:17:51.414 "dma_device_id": "system", 00:17:51.414 "dma_device_type": 1 00:17:51.414 }, 00:17:51.414 { 00:17:51.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.414 "dma_device_type": 2 00:17:51.414 } 00:17:51.414 ], 00:17:51.414 "driver_specific": {} 00:17:51.414 } 00:17:51.414 ] 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.414 
11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.414 [2024-11-15 11:29:34.315759] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.414 [2024-11-15 11:29:34.315814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:51.414 [2024-11-15 11:29:34.315844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.414 [2024-11-15 11:29:34.318450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.414 [2024-11-15 11:29:34.318583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.414 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.415 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.415 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.415 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.415 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.415 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.415 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.679 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.679 "name": "Existed_Raid", 00:17:51.679 "uuid": "646d29fe-b3ea-466a-b05a-86558c593f33", 00:17:51.679 "strip_size_kb": 64, 00:17:51.679 "state": "configuring", 00:17:51.679 "raid_level": "raid5f", 00:17:51.679 "superblock": true, 00:17:51.679 "num_base_bdevs": 4, 00:17:51.679 "num_base_bdevs_discovered": 3, 00:17:51.679 "num_base_bdevs_operational": 4, 00:17:51.679 "base_bdevs_list": [ 00:17:51.679 { 00:17:51.679 "name": "BaseBdev1", 00:17:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.679 "is_configured": false, 00:17:51.679 "data_offset": 0, 00:17:51.679 "data_size": 0 00:17:51.679 }, 00:17:51.679 { 00:17:51.679 "name": "BaseBdev2", 00:17:51.679 "uuid": "03c90c71-fcff-4669-b43a-ce29359a3cd2", 00:17:51.679 "is_configured": true, 00:17:51.679 "data_offset": 2048, 00:17:51.679 "data_size": 63488 00:17:51.679 }, 00:17:51.679 { 00:17:51.679 "name": "BaseBdev3", 
00:17:51.679 "uuid": "7b8e3394-f53b-4b07-9a22-539eb42d4113", 00:17:51.679 "is_configured": true, 00:17:51.679 "data_offset": 2048, 00:17:51.679 "data_size": 63488 00:17:51.679 }, 00:17:51.679 { 00:17:51.679 "name": "BaseBdev4", 00:17:51.679 "uuid": "b546150a-9c8d-49f9-b460-73238d3d4ed1", 00:17:51.679 "is_configured": true, 00:17:51.679 "data_offset": 2048, 00:17:51.679 "data_size": 63488 00:17:51.679 } 00:17:51.679 ] 00:17:51.679 }' 00:17:51.679 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.679 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.950 [2024-11-15 11:29:34.839988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.950 
11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.950 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.210 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.210 "name": "Existed_Raid", 00:17:52.210 "uuid": "646d29fe-b3ea-466a-b05a-86558c593f33", 00:17:52.210 "strip_size_kb": 64, 00:17:52.210 "state": "configuring", 00:17:52.210 "raid_level": "raid5f", 00:17:52.210 "superblock": true, 00:17:52.210 "num_base_bdevs": 4, 00:17:52.210 "num_base_bdevs_discovered": 2, 00:17:52.210 "num_base_bdevs_operational": 4, 00:17:52.210 "base_bdevs_list": [ 00:17:52.210 { 00:17:52.210 "name": "BaseBdev1", 00:17:52.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.210 "is_configured": false, 00:17:52.210 "data_offset": 0, 00:17:52.210 "data_size": 0 00:17:52.210 }, 00:17:52.210 { 00:17:52.210 "name": null, 00:17:52.210 "uuid": "03c90c71-fcff-4669-b43a-ce29359a3cd2", 00:17:52.210 "is_configured": false, 00:17:52.210 "data_offset": 0, 00:17:52.210 "data_size": 63488 00:17:52.210 }, 00:17:52.210 { 
00:17:52.210 "name": "BaseBdev3", 00:17:52.210 "uuid": "7b8e3394-f53b-4b07-9a22-539eb42d4113", 00:17:52.210 "is_configured": true, 00:17:52.210 "data_offset": 2048, 00:17:52.210 "data_size": 63488 00:17:52.210 }, 00:17:52.210 { 00:17:52.210 "name": "BaseBdev4", 00:17:52.210 "uuid": "b546150a-9c8d-49f9-b460-73238d3d4ed1", 00:17:52.210 "is_configured": true, 00:17:52.210 "data_offset": 2048, 00:17:52.210 "data_size": 63488 00:17:52.210 } 00:17:52.210 ] 00:17:52.210 }' 00:17:52.210 11:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.210 11:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.469 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.469 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.469 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.469 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:52.469 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.728 [2024-11-15 11:29:35.475634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.728 BaseBdev1 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.728 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.728 [ 00:17:52.728 { 00:17:52.728 "name": "BaseBdev1", 00:17:52.728 "aliases": [ 00:17:52.728 "2fd68373-665d-4c63-bb1f-0ebdb97f67d6" 00:17:52.728 ], 00:17:52.728 "product_name": "Malloc disk", 00:17:52.728 "block_size": 512, 00:17:52.728 "num_blocks": 65536, 00:17:52.729 "uuid": "2fd68373-665d-4c63-bb1f-0ebdb97f67d6", 00:17:52.729 "assigned_rate_limits": { 00:17:52.729 "rw_ios_per_sec": 0, 00:17:52.729 "rw_mbytes_per_sec": 0, 00:17:52.729 
"r_mbytes_per_sec": 0, 00:17:52.729 "w_mbytes_per_sec": 0 00:17:52.729 }, 00:17:52.729 "claimed": true, 00:17:52.729 "claim_type": "exclusive_write", 00:17:52.729 "zoned": false, 00:17:52.729 "supported_io_types": { 00:17:52.729 "read": true, 00:17:52.729 "write": true, 00:17:52.729 "unmap": true, 00:17:52.729 "flush": true, 00:17:52.729 "reset": true, 00:17:52.729 "nvme_admin": false, 00:17:52.729 "nvme_io": false, 00:17:52.729 "nvme_io_md": false, 00:17:52.729 "write_zeroes": true, 00:17:52.729 "zcopy": true, 00:17:52.729 "get_zone_info": false, 00:17:52.729 "zone_management": false, 00:17:52.729 "zone_append": false, 00:17:52.729 "compare": false, 00:17:52.729 "compare_and_write": false, 00:17:52.729 "abort": true, 00:17:52.729 "seek_hole": false, 00:17:52.729 "seek_data": false, 00:17:52.729 "copy": true, 00:17:52.729 "nvme_iov_md": false 00:17:52.729 }, 00:17:52.729 "memory_domains": [ 00:17:52.729 { 00:17:52.729 "dma_device_id": "system", 00:17:52.729 "dma_device_type": 1 00:17:52.729 }, 00:17:52.729 { 00:17:52.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.729 "dma_device_type": 2 00:17:52.729 } 00:17:52.729 ], 00:17:52.729 "driver_specific": {} 00:17:52.729 } 00:17:52.729 ] 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.729 11:29:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.729 "name": "Existed_Raid", 00:17:52.729 "uuid": "646d29fe-b3ea-466a-b05a-86558c593f33", 00:17:52.729 "strip_size_kb": 64, 00:17:52.729 "state": "configuring", 00:17:52.729 "raid_level": "raid5f", 00:17:52.729 "superblock": true, 00:17:52.729 "num_base_bdevs": 4, 00:17:52.729 "num_base_bdevs_discovered": 3, 00:17:52.729 "num_base_bdevs_operational": 4, 00:17:52.729 "base_bdevs_list": [ 00:17:52.729 { 00:17:52.729 "name": "BaseBdev1", 00:17:52.729 "uuid": "2fd68373-665d-4c63-bb1f-0ebdb97f67d6", 00:17:52.729 "is_configured": true, 00:17:52.729 "data_offset": 2048, 00:17:52.729 "data_size": 63488 00:17:52.729 
}, 00:17:52.729 { 00:17:52.729 "name": null, 00:17:52.729 "uuid": "03c90c71-fcff-4669-b43a-ce29359a3cd2", 00:17:52.729 "is_configured": false, 00:17:52.729 "data_offset": 0, 00:17:52.729 "data_size": 63488 00:17:52.729 }, 00:17:52.729 { 00:17:52.729 "name": "BaseBdev3", 00:17:52.729 "uuid": "7b8e3394-f53b-4b07-9a22-539eb42d4113", 00:17:52.729 "is_configured": true, 00:17:52.729 "data_offset": 2048, 00:17:52.729 "data_size": 63488 00:17:52.729 }, 00:17:52.729 { 00:17:52.729 "name": "BaseBdev4", 00:17:52.729 "uuid": "b546150a-9c8d-49f9-b460-73238d3d4ed1", 00:17:52.729 "is_configured": true, 00:17:52.729 "data_offset": 2048, 00:17:52.729 "data_size": 63488 00:17:52.729 } 00:17:52.729 ] 00:17:52.729 }' 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.729 11:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.298 
[2024-11-15 11:29:36.080026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.298 "name": "Existed_Raid", 00:17:53.298 "uuid": "646d29fe-b3ea-466a-b05a-86558c593f33", 00:17:53.298 "strip_size_kb": 64, 00:17:53.298 "state": "configuring", 00:17:53.298 "raid_level": "raid5f", 00:17:53.298 "superblock": true, 00:17:53.298 "num_base_bdevs": 4, 00:17:53.298 "num_base_bdevs_discovered": 2, 00:17:53.298 "num_base_bdevs_operational": 4, 00:17:53.298 "base_bdevs_list": [ 00:17:53.298 { 00:17:53.298 "name": "BaseBdev1", 00:17:53.298 "uuid": "2fd68373-665d-4c63-bb1f-0ebdb97f67d6", 00:17:53.298 "is_configured": true, 00:17:53.298 "data_offset": 2048, 00:17:53.298 "data_size": 63488 00:17:53.298 }, 00:17:53.298 { 00:17:53.298 "name": null, 00:17:53.298 "uuid": "03c90c71-fcff-4669-b43a-ce29359a3cd2", 00:17:53.298 "is_configured": false, 00:17:53.298 "data_offset": 0, 00:17:53.298 "data_size": 63488 00:17:53.298 }, 00:17:53.298 { 00:17:53.298 "name": null, 00:17:53.298 "uuid": "7b8e3394-f53b-4b07-9a22-539eb42d4113", 00:17:53.298 "is_configured": false, 00:17:53.298 "data_offset": 0, 00:17:53.298 "data_size": 63488 00:17:53.298 }, 00:17:53.298 { 00:17:53.298 "name": "BaseBdev4", 00:17:53.298 "uuid": "b546150a-9c8d-49f9-b460-73238d3d4ed1", 00:17:53.298 "is_configured": true, 00:17:53.298 "data_offset": 2048, 00:17:53.298 "data_size": 63488 00:17:53.298 } 00:17:53.298 ] 00:17:53.298 }' 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.298 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.867 [2024-11-15 11:29:36.680265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.867 11:29:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.867 "name": "Existed_Raid", 00:17:53.867 "uuid": "646d29fe-b3ea-466a-b05a-86558c593f33", 00:17:53.867 "strip_size_kb": 64, 00:17:53.867 "state": "configuring", 00:17:53.867 "raid_level": "raid5f", 00:17:53.867 "superblock": true, 00:17:53.867 "num_base_bdevs": 4, 00:17:53.867 "num_base_bdevs_discovered": 3, 00:17:53.867 "num_base_bdevs_operational": 4, 00:17:53.867 "base_bdevs_list": [ 00:17:53.867 { 00:17:53.867 "name": "BaseBdev1", 00:17:53.867 "uuid": "2fd68373-665d-4c63-bb1f-0ebdb97f67d6", 00:17:53.867 "is_configured": true, 00:17:53.867 "data_offset": 2048, 00:17:53.867 "data_size": 63488 00:17:53.867 }, 00:17:53.867 { 00:17:53.867 "name": null, 00:17:53.867 "uuid": "03c90c71-fcff-4669-b43a-ce29359a3cd2", 00:17:53.867 "is_configured": false, 00:17:53.867 "data_offset": 0, 00:17:53.867 "data_size": 63488 00:17:53.867 }, 00:17:53.867 { 00:17:53.867 "name": "BaseBdev3", 00:17:53.867 "uuid": "7b8e3394-f53b-4b07-9a22-539eb42d4113", 00:17:53.867 "is_configured": true, 00:17:53.867 "data_offset": 2048, 00:17:53.867 "data_size": 63488 00:17:53.867 }, 00:17:53.867 { 
00:17:53.867 "name": "BaseBdev4", 00:17:53.867 "uuid": "b546150a-9c8d-49f9-b460-73238d3d4ed1", 00:17:53.867 "is_configured": true, 00:17:53.867 "data_offset": 2048, 00:17:53.867 "data_size": 63488 00:17:53.867 } 00:17:53.867 ] 00:17:53.867 }' 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.867 11:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.435 [2024-11-15 11:29:37.284479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.435 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.436 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.694 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.694 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.694 "name": "Existed_Raid", 00:17:54.694 "uuid": "646d29fe-b3ea-466a-b05a-86558c593f33", 00:17:54.694 "strip_size_kb": 64, 00:17:54.694 "state": "configuring", 00:17:54.694 "raid_level": "raid5f", 00:17:54.694 "superblock": true, 00:17:54.694 "num_base_bdevs": 4, 00:17:54.694 "num_base_bdevs_discovered": 2, 00:17:54.694 
"num_base_bdevs_operational": 4, 00:17:54.694 "base_bdevs_list": [ 00:17:54.694 { 00:17:54.694 "name": null, 00:17:54.694 "uuid": "2fd68373-665d-4c63-bb1f-0ebdb97f67d6", 00:17:54.694 "is_configured": false, 00:17:54.694 "data_offset": 0, 00:17:54.694 "data_size": 63488 00:17:54.694 }, 00:17:54.694 { 00:17:54.694 "name": null, 00:17:54.694 "uuid": "03c90c71-fcff-4669-b43a-ce29359a3cd2", 00:17:54.694 "is_configured": false, 00:17:54.694 "data_offset": 0, 00:17:54.694 "data_size": 63488 00:17:54.694 }, 00:17:54.694 { 00:17:54.694 "name": "BaseBdev3", 00:17:54.694 "uuid": "7b8e3394-f53b-4b07-9a22-539eb42d4113", 00:17:54.694 "is_configured": true, 00:17:54.694 "data_offset": 2048, 00:17:54.694 "data_size": 63488 00:17:54.694 }, 00:17:54.694 { 00:17:54.694 "name": "BaseBdev4", 00:17:54.694 "uuid": "b546150a-9c8d-49f9-b460-73238d3d4ed1", 00:17:54.694 "is_configured": true, 00:17:54.694 "data_offset": 2048, 00:17:54.694 "data_size": 63488 00:17:54.694 } 00:17:54.694 ] 00:17:54.694 }' 00:17:54.694 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.694 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.264 [2024-11-15 11:29:37.966690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.264 11:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.264 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.264 "name": "Existed_Raid", 00:17:55.264 "uuid": "646d29fe-b3ea-466a-b05a-86558c593f33", 00:17:55.264 "strip_size_kb": 64, 00:17:55.264 "state": "configuring", 00:17:55.264 "raid_level": "raid5f", 00:17:55.264 "superblock": true, 00:17:55.264 "num_base_bdevs": 4, 00:17:55.264 "num_base_bdevs_discovered": 3, 00:17:55.264 "num_base_bdevs_operational": 4, 00:17:55.264 "base_bdevs_list": [ 00:17:55.264 { 00:17:55.264 "name": null, 00:17:55.264 "uuid": "2fd68373-665d-4c63-bb1f-0ebdb97f67d6", 00:17:55.264 "is_configured": false, 00:17:55.264 "data_offset": 0, 00:17:55.264 "data_size": 63488 00:17:55.264 }, 00:17:55.264 { 00:17:55.264 "name": "BaseBdev2", 00:17:55.264 "uuid": "03c90c71-fcff-4669-b43a-ce29359a3cd2", 00:17:55.264 "is_configured": true, 00:17:55.264 "data_offset": 2048, 00:17:55.264 "data_size": 63488 00:17:55.264 }, 00:17:55.264 { 00:17:55.264 "name": "BaseBdev3", 00:17:55.264 "uuid": "7b8e3394-f53b-4b07-9a22-539eb42d4113", 00:17:55.264 "is_configured": true, 00:17:55.264 "data_offset": 2048, 00:17:55.264 "data_size": 63488 00:17:55.264 }, 00:17:55.264 { 00:17:55.264 "name": "BaseBdev4", 00:17:55.264 "uuid": "b546150a-9c8d-49f9-b460-73238d3d4ed1", 00:17:55.264 "is_configured": true, 00:17:55.264 "data_offset": 2048, 00:17:55.264 "data_size": 63488 00:17:55.264 } 00:17:55.264 ] 00:17:55.264 }' 00:17:55.264 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.265 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2fd68373-665d-4c63-bb1f-0ebdb97f67d6 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.834 [2024-11-15 11:29:38.639831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:55.834 [2024-11-15 11:29:38.640182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:55.834 [2024-11-15 
11:29:38.640216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:55.834 [2024-11-15 11:29:38.640556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:55.834 NewBaseBdev 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.834 [2024-11-15 11:29:38.647296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:55.834 [2024-11-15 11:29:38.647331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:55.834 [2024-11-15 11:29:38.647632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.834 [ 00:17:55.834 { 00:17:55.834 "name": "NewBaseBdev", 00:17:55.834 "aliases": [ 00:17:55.834 "2fd68373-665d-4c63-bb1f-0ebdb97f67d6" 00:17:55.834 ], 00:17:55.834 "product_name": "Malloc disk", 00:17:55.834 "block_size": 512, 00:17:55.834 "num_blocks": 65536, 00:17:55.834 "uuid": "2fd68373-665d-4c63-bb1f-0ebdb97f67d6", 00:17:55.834 "assigned_rate_limits": { 00:17:55.834 "rw_ios_per_sec": 0, 00:17:55.834 "rw_mbytes_per_sec": 0, 00:17:55.834 "r_mbytes_per_sec": 0, 00:17:55.834 "w_mbytes_per_sec": 0 00:17:55.834 }, 00:17:55.834 "claimed": true, 00:17:55.834 "claim_type": "exclusive_write", 00:17:55.834 "zoned": false, 00:17:55.834 "supported_io_types": { 00:17:55.834 "read": true, 00:17:55.834 "write": true, 00:17:55.834 "unmap": true, 00:17:55.834 "flush": true, 00:17:55.834 "reset": true, 00:17:55.834 "nvme_admin": false, 00:17:55.834 "nvme_io": false, 00:17:55.834 "nvme_io_md": false, 00:17:55.834 "write_zeroes": true, 00:17:55.834 "zcopy": true, 00:17:55.834 "get_zone_info": false, 00:17:55.834 "zone_management": false, 00:17:55.834 "zone_append": false, 00:17:55.834 "compare": false, 00:17:55.834 "compare_and_write": false, 00:17:55.834 "abort": true, 00:17:55.834 "seek_hole": false, 00:17:55.834 "seek_data": false, 00:17:55.834 "copy": true, 00:17:55.834 "nvme_iov_md": false 00:17:55.834 }, 00:17:55.834 "memory_domains": [ 00:17:55.834 { 00:17:55.834 "dma_device_id": "system", 00:17:55.834 "dma_device_type": 1 00:17:55.834 }, 00:17:55.834 { 00:17:55.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.834 "dma_device_type": 2 00:17:55.834 } 00:17:55.834 ], 00:17:55.834 "driver_specific": {} 00:17:55.834 } 00:17:55.834 ] 00:17:55.834 11:29:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.834 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.835 "name": "Existed_Raid", 00:17:55.835 "uuid": "646d29fe-b3ea-466a-b05a-86558c593f33", 00:17:55.835 "strip_size_kb": 64, 00:17:55.835 "state": "online", 00:17:55.835 "raid_level": "raid5f", 00:17:55.835 "superblock": true, 00:17:55.835 "num_base_bdevs": 4, 00:17:55.835 "num_base_bdevs_discovered": 4, 00:17:55.835 "num_base_bdevs_operational": 4, 00:17:55.835 "base_bdevs_list": [ 00:17:55.835 { 00:17:55.835 "name": "NewBaseBdev", 00:17:55.835 "uuid": "2fd68373-665d-4c63-bb1f-0ebdb97f67d6", 00:17:55.835 "is_configured": true, 00:17:55.835 "data_offset": 2048, 00:17:55.835 "data_size": 63488 00:17:55.835 }, 00:17:55.835 { 00:17:55.835 "name": "BaseBdev2", 00:17:55.835 "uuid": "03c90c71-fcff-4669-b43a-ce29359a3cd2", 00:17:55.835 "is_configured": true, 00:17:55.835 "data_offset": 2048, 00:17:55.835 "data_size": 63488 00:17:55.835 }, 00:17:55.835 { 00:17:55.835 "name": "BaseBdev3", 00:17:55.835 "uuid": "7b8e3394-f53b-4b07-9a22-539eb42d4113", 00:17:55.835 "is_configured": true, 00:17:55.835 "data_offset": 2048, 00:17:55.835 "data_size": 63488 00:17:55.835 }, 00:17:55.835 { 00:17:55.835 "name": "BaseBdev4", 00:17:55.835 "uuid": "b546150a-9c8d-49f9-b460-73238d3d4ed1", 00:17:55.835 "is_configured": true, 00:17:55.835 "data_offset": 2048, 00:17:55.835 "data_size": 63488 00:17:55.835 } 00:17:55.835 ] 00:17:55.835 }' 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.835 11:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.404 [2024-11-15 11:29:39.191968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:56.404 "name": "Existed_Raid", 00:17:56.404 "aliases": [ 00:17:56.404 "646d29fe-b3ea-466a-b05a-86558c593f33" 00:17:56.404 ], 00:17:56.404 "product_name": "Raid Volume", 00:17:56.404 "block_size": 512, 00:17:56.404 "num_blocks": 190464, 00:17:56.404 "uuid": "646d29fe-b3ea-466a-b05a-86558c593f33", 00:17:56.404 "assigned_rate_limits": { 00:17:56.404 "rw_ios_per_sec": 0, 00:17:56.404 "rw_mbytes_per_sec": 0, 00:17:56.404 "r_mbytes_per_sec": 0, 00:17:56.404 "w_mbytes_per_sec": 0 00:17:56.404 }, 00:17:56.404 "claimed": false, 00:17:56.404 "zoned": false, 00:17:56.404 "supported_io_types": { 00:17:56.404 "read": true, 00:17:56.404 "write": true, 00:17:56.404 "unmap": false, 00:17:56.404 "flush": false, 00:17:56.404 "reset": true, 00:17:56.404 "nvme_admin": false, 00:17:56.404 "nvme_io": false, 
00:17:56.404 "nvme_io_md": false, 00:17:56.404 "write_zeroes": true, 00:17:56.404 "zcopy": false, 00:17:56.404 "get_zone_info": false, 00:17:56.404 "zone_management": false, 00:17:56.404 "zone_append": false, 00:17:56.404 "compare": false, 00:17:56.404 "compare_and_write": false, 00:17:56.404 "abort": false, 00:17:56.404 "seek_hole": false, 00:17:56.404 "seek_data": false, 00:17:56.404 "copy": false, 00:17:56.404 "nvme_iov_md": false 00:17:56.404 }, 00:17:56.404 "driver_specific": { 00:17:56.404 "raid": { 00:17:56.404 "uuid": "646d29fe-b3ea-466a-b05a-86558c593f33", 00:17:56.404 "strip_size_kb": 64, 00:17:56.404 "state": "online", 00:17:56.404 "raid_level": "raid5f", 00:17:56.404 "superblock": true, 00:17:56.404 "num_base_bdevs": 4, 00:17:56.404 "num_base_bdevs_discovered": 4, 00:17:56.404 "num_base_bdevs_operational": 4, 00:17:56.404 "base_bdevs_list": [ 00:17:56.404 { 00:17:56.404 "name": "NewBaseBdev", 00:17:56.404 "uuid": "2fd68373-665d-4c63-bb1f-0ebdb97f67d6", 00:17:56.404 "is_configured": true, 00:17:56.404 "data_offset": 2048, 00:17:56.404 "data_size": 63488 00:17:56.404 }, 00:17:56.404 { 00:17:56.404 "name": "BaseBdev2", 00:17:56.404 "uuid": "03c90c71-fcff-4669-b43a-ce29359a3cd2", 00:17:56.404 "is_configured": true, 00:17:56.404 "data_offset": 2048, 00:17:56.404 "data_size": 63488 00:17:56.404 }, 00:17:56.404 { 00:17:56.404 "name": "BaseBdev3", 00:17:56.404 "uuid": "7b8e3394-f53b-4b07-9a22-539eb42d4113", 00:17:56.404 "is_configured": true, 00:17:56.404 "data_offset": 2048, 00:17:56.404 "data_size": 63488 00:17:56.404 }, 00:17:56.404 { 00:17:56.404 "name": "BaseBdev4", 00:17:56.404 "uuid": "b546150a-9c8d-49f9-b460-73238d3d4ed1", 00:17:56.404 "is_configured": true, 00:17:56.404 "data_offset": 2048, 00:17:56.404 "data_size": 63488 00:17:56.404 } 00:17:56.404 ] 00:17:56.404 } 00:17:56.404 } 00:17:56.404 }' 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:56.404 BaseBdev2 00:17:56.404 BaseBdev3 00:17:56.404 BaseBdev4' 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.404 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.664 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.664 [2024-11-15 11:29:39.563738] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.664 [2024-11-15 11:29:39.563794] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.665 [2024-11-15 11:29:39.563883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.665 [2024-11-15 11:29:39.564360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.665 [2024-11-15 11:29:39.564388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:56.665 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.665 11:29:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83712 00:17:56.665 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83712 ']' 00:17:56.665 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 83712 00:17:56.665 11:29:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:56.665 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:56.665 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83712 00:17:56.665 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:56.665 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:56.665 killing process with pid 83712 00:17:56.665 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83712' 00:17:56.665 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83712 00:17:56.665 [2024-11-15 11:29:39.603325] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:56.665 11:29:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83712 00:17:57.257 [2024-11-15 11:29:39.932638] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.194 11:29:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:58.194 00:17:58.194 real 0m12.998s 00:17:58.194 user 0m21.474s 00:17:58.194 sys 0m2.024s 00:17:58.194 11:29:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:58.194 11:29:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.194 ************************************ 00:17:58.194 END TEST raid5f_state_function_test_sb 00:17:58.194 ************************************ 00:17:58.194 11:29:41 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:58.194 11:29:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:58.194 
11:29:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:58.194 11:29:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.194 ************************************ 00:17:58.194 START TEST raid5f_superblock_test 00:17:58.194 ************************************ 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84390 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84390 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84390 ']' 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:58.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:58.194 11:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.453 [2024-11-15 11:29:41.166383] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:17:58.454 [2024-11-15 11:29:41.166606] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84390 ] 00:17:58.454 [2024-11-15 11:29:41.358628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.713 [2024-11-15 11:29:41.533851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.972 [2024-11-15 11:29:41.742061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.972 [2024-11-15 11:29:41.742096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.231 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.490 malloc1 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.490 [2024-11-15 11:29:42.207084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:59.490 [2024-11-15 11:29:42.207353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.490 [2024-11-15 11:29:42.207413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:59.490 [2024-11-15 11:29:42.207430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.490 [2024-11-15 11:29:42.210783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.490 pt1 00:17:59.490 [2024-11-15 11:29:42.210975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
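The `@416`-`@423` trace lines above repeat four times: each pass appends one malloc name, one passthru name, and one fixed UUID to the three arrays declared at the top of the test. A pure-bash sketch of that bookkeeping (the `rpc_cmd bdev_malloc_create` / `bdev_passthru_create` calls themselves are omitted here):

```shell
# Rebuild the three name/UUID arrays exactly as the traced loop does,
# for num_base_bdevs=4 base devices.
num_base_bdevs=4
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs_malloc+=("malloc$i")
    base_bdevs_pt+=("pt$i")
    # UUIDs follow the 00000000-0000-0000-0000-00000000000N pattern
    # visible in the bdev_passthru_create calls above.
    base_bdevs_pt_uuid+=("$(printf '00000000-0000-0000-0000-%012d' "$i")")
done
echo "${base_bdevs_pt[*]}"   # pt1 pt2 pt3 pt4
```

Keeping the malloc, passthru, and UUID lists in parallel arrays is what later lets the teardown phase (`for i in "${base_bdevs_pt[@]}"`) delete each `ptN` by name.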
00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.490 malloc2 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.490 [2024-11-15 11:29:42.263526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:59.490 [2024-11-15 11:29:42.263733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.490 [2024-11-15 11:29:42.263822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:59.490 [2024-11-15 11:29:42.263971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.490 [2024-11-15 11:29:42.266779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.490 [2024-11-15 11:29:42.266976] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:59.490 pt2 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.490 malloc3 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.490 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.491 [2024-11-15 11:29:42.331408] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:59.491 [2024-11-15 11:29:42.331609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.491 [2024-11-15 11:29:42.331701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:59.491 [2024-11-15 11:29:42.331855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.491 [2024-11-15 11:29:42.334904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.491 pt3 00:17:59.491 [2024-11-15 11:29:42.335089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.491 11:29:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.491 malloc4 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.491 [2024-11-15 11:29:42.382645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:59.491 [2024-11-15 11:29:42.382882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.491 [2024-11-15 11:29:42.382952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:59.491 [2024-11-15 11:29:42.383050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.491 [2024-11-15 11:29:42.386242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.491 pt4 00:17:59.491 [2024-11-15 11:29:42.386401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.491 [2024-11-15 11:29:42.390784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:59.491 [2024-11-15 11:29:42.393400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:59.491 [2024-11-15 11:29:42.393601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:59.491 [2024-11-15 11:29:42.393720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:59.491 [2024-11-15 11:29:42.394063] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:59.491 [2024-11-15 11:29:42.394224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:59.491 [2024-11-15 11:29:42.394629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:59.491 [2024-11-15 11:29:42.401108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:59.491 [2024-11-15 11:29:42.401297] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:59.491 [2024-11-15 11:29:42.401723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.491 
11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.491 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.750 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.750 "name": "raid_bdev1", 00:17:59.750 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:17:59.750 "strip_size_kb": 64, 00:17:59.750 "state": "online", 00:17:59.750 "raid_level": "raid5f", 00:17:59.750 "superblock": true, 00:17:59.750 "num_base_bdevs": 4, 00:17:59.750 "num_base_bdevs_discovered": 4, 00:17:59.750 "num_base_bdevs_operational": 4, 00:17:59.750 "base_bdevs_list": [ 00:17:59.750 { 00:17:59.750 "name": "pt1", 00:17:59.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.750 "is_configured": true, 00:17:59.750 "data_offset": 2048, 00:17:59.750 "data_size": 63488 00:17:59.750 }, 00:17:59.750 { 00:17:59.750 "name": "pt2", 00:17:59.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.750 "is_configured": true, 00:17:59.750 "data_offset": 2048, 00:17:59.750 
"data_size": 63488 00:17:59.750 }, 00:17:59.750 { 00:17:59.750 "name": "pt3", 00:17:59.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.750 "is_configured": true, 00:17:59.750 "data_offset": 2048, 00:17:59.750 "data_size": 63488 00:17:59.750 }, 00:17:59.750 { 00:17:59.750 "name": "pt4", 00:17:59.750 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:59.750 "is_configured": true, 00:17:59.750 "data_offset": 2048, 00:17:59.750 "data_size": 63488 00:17:59.750 } 00:17:59.750 ] 00:17:59.750 }' 00:17:59.750 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.750 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.008 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:00.008 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:00.008 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:00.008 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:00.008 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:00.008 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:00.008 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.008 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.008 11:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.008 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:00.008 [2024-11-15 11:29:42.926087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.008 11:29:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.267 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:00.267 "name": "raid_bdev1", 00:18:00.267 "aliases": [ 00:18:00.267 "0d4c5e31-8d60-407f-9c30-f7f327da9523" 00:18:00.267 ], 00:18:00.267 "product_name": "Raid Volume", 00:18:00.267 "block_size": 512, 00:18:00.267 "num_blocks": 190464, 00:18:00.267 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:00.267 "assigned_rate_limits": { 00:18:00.267 "rw_ios_per_sec": 0, 00:18:00.267 "rw_mbytes_per_sec": 0, 00:18:00.267 "r_mbytes_per_sec": 0, 00:18:00.267 "w_mbytes_per_sec": 0 00:18:00.267 }, 00:18:00.267 "claimed": false, 00:18:00.267 "zoned": false, 00:18:00.267 "supported_io_types": { 00:18:00.267 "read": true, 00:18:00.267 "write": true, 00:18:00.267 "unmap": false, 00:18:00.267 "flush": false, 00:18:00.267 "reset": true, 00:18:00.267 "nvme_admin": false, 00:18:00.267 "nvme_io": false, 00:18:00.267 "nvme_io_md": false, 00:18:00.267 "write_zeroes": true, 00:18:00.267 "zcopy": false, 00:18:00.267 "get_zone_info": false, 00:18:00.267 "zone_management": false, 00:18:00.267 "zone_append": false, 00:18:00.267 "compare": false, 00:18:00.267 "compare_and_write": false, 00:18:00.267 "abort": false, 00:18:00.267 "seek_hole": false, 00:18:00.267 "seek_data": false, 00:18:00.267 "copy": false, 00:18:00.267 "nvme_iov_md": false 00:18:00.267 }, 00:18:00.267 "driver_specific": { 00:18:00.267 "raid": { 00:18:00.267 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:00.267 "strip_size_kb": 64, 00:18:00.267 "state": "online", 00:18:00.267 "raid_level": "raid5f", 00:18:00.267 "superblock": true, 00:18:00.267 "num_base_bdevs": 4, 00:18:00.267 "num_base_bdevs_discovered": 4, 00:18:00.267 "num_base_bdevs_operational": 4, 00:18:00.267 "base_bdevs_list": [ 00:18:00.267 { 00:18:00.267 "name": "pt1", 00:18:00.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.267 "is_configured": true, 00:18:00.267 "data_offset": 2048, 
00:18:00.267 "data_size": 63488 00:18:00.267 }, 00:18:00.267 { 00:18:00.267 "name": "pt2", 00:18:00.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.267 "is_configured": true, 00:18:00.267 "data_offset": 2048, 00:18:00.267 "data_size": 63488 00:18:00.267 }, 00:18:00.267 { 00:18:00.267 "name": "pt3", 00:18:00.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:00.267 "is_configured": true, 00:18:00.267 "data_offset": 2048, 00:18:00.267 "data_size": 63488 00:18:00.267 }, 00:18:00.267 { 00:18:00.267 "name": "pt4", 00:18:00.267 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:00.267 "is_configured": true, 00:18:00.267 "data_offset": 2048, 00:18:00.267 "data_size": 63488 00:18:00.267 } 00:18:00.267 ] 00:18:00.267 } 00:18:00.267 } 00:18:00.267 }' 00:18:00.268 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:00.268 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:00.268 pt2 00:18:00.268 pt3 00:18:00.268 pt4' 00:18:00.268 11:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.268 11:29:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.268 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.528 [2024-11-15 11:29:43.278168] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0d4c5e31-8d60-407f-9c30-f7f327da9523 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
0d4c5e31-8d60-407f-9c30-f7f327da9523 ']' 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.528 [2024-11-15 11:29:43.329992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.528 [2024-11-15 11:29:43.330018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.528 [2024-11-15 11:29:43.330097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.528 [2024-11-15 11:29:43.330279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.528 [2024-11-15 11:29:43.330310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.528 
11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.528 11:29:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.528 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.788 [2024-11-15 11:29:43.494085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:00.788 [2024-11-15 11:29:43.496812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:00.788 [2024-11-15 11:29:43.496873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:00.788 [2024-11-15 11:29:43.496925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:00.788 [2024-11-15 11:29:43.497004] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:00.788 [2024-11-15 11:29:43.497086] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:00.788 [2024-11-15 11:29:43.497115] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:00.788 [2024-11-15 11:29:43.497142] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:00.788 [2024-11-15 11:29:43.497161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.788 [2024-11-15 11:29:43.497190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:00.788 request: 00:18:00.788 { 00:18:00.788 "name": "raid_bdev1", 00:18:00.788 "raid_level": "raid5f", 00:18:00.788 "base_bdevs": [ 00:18:00.788 "malloc1", 00:18:00.788 "malloc2", 00:18:00.788 "malloc3", 00:18:00.788 "malloc4" 00:18:00.788 ], 00:18:00.788 "strip_size_kb": 64, 00:18:00.788 "superblock": false, 00:18:00.788 "method": "bdev_raid_create", 00:18:00.788 "req_id": 1 00:18:00.788 } 00:18:00.788 Got JSON-RPC error response 
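The `@457` step above wraps the `bdev_raid_create` call in `NOT`, because the RPC is *expected* to be rejected (the malloc bdevs still carry the old superblock, so the daemon answers with code -17, "File exists"). A minimal sketch of that expected-failure pattern; `fails` is a hypothetical stand-in for the rejected `rpc_cmd` call:

```shell
# NOT inverts a command's exit status, so the test succeeds only
# when the wrapped command fails (mirroring autotest_common.sh).
NOT() { ! "$@"; }

# Hypothetical stand-in for the rpc_cmd call that the daemon rejects.
fails() { return 1; }

NOT fails && echo "expected failure observed"
```

The surrounding `es=0` / `es=1` bookkeeping in the trace serves the same purpose: capture the inverted status so the script can assert the error actually occurred instead of aborting on it.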
00:18:00.788 response: 00:18:00.788 { 00:18:00.788 "code": -17, 00:18:00.788 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:00.788 } 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.788 [2024-11-15 11:29:43.562211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.788 [2024-11-15 11:29:43.562446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:00.788 [2024-11-15 11:29:43.562525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:00.788 [2024-11-15 11:29:43.562766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.788 [2024-11-15 11:29:43.566534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.788 [2024-11-15 11:29:43.566643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.788 [2024-11-15 11:29:43.566799] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:00.788 [2024-11-15 11:29:43.566879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.788 pt1 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.788 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.788 "name": "raid_bdev1", 00:18:00.788 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:00.788 "strip_size_kb": 64, 00:18:00.788 "state": "configuring", 00:18:00.788 "raid_level": "raid5f", 00:18:00.788 "superblock": true, 00:18:00.788 "num_base_bdevs": 4, 00:18:00.788 "num_base_bdevs_discovered": 1, 00:18:00.788 "num_base_bdevs_operational": 4, 00:18:00.788 "base_bdevs_list": [ 00:18:00.788 { 00:18:00.788 "name": "pt1", 00:18:00.788 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.788 "is_configured": true, 00:18:00.788 "data_offset": 2048, 00:18:00.788 "data_size": 63488 00:18:00.788 }, 00:18:00.788 { 00:18:00.788 "name": null, 00:18:00.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.788 "is_configured": false, 00:18:00.788 "data_offset": 2048, 00:18:00.788 "data_size": 63488 00:18:00.788 }, 00:18:00.788 { 00:18:00.788 "name": null, 00:18:00.788 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:00.788 "is_configured": false, 00:18:00.789 "data_offset": 2048, 00:18:00.789 "data_size": 63488 00:18:00.789 }, 00:18:00.789 { 00:18:00.789 "name": null, 00:18:00.789 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:00.789 "is_configured": false, 00:18:00.789 "data_offset": 2048, 00:18:00.789 "data_size": 63488 00:18:00.789 } 00:18:00.789 ] 00:18:00.789 }' 
00:18:00.789 11:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.789 11:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.357 [2024-11-15 11:29:44.082987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.357 [2024-11-15 11:29:44.083271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.357 [2024-11-15 11:29:44.083343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:01.357 [2024-11-15 11:29:44.083369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.357 [2024-11-15 11:29:44.084006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.357 [2024-11-15 11:29:44.084034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.357 [2024-11-15 11:29:44.084132] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:01.357 [2024-11-15 11:29:44.084191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.357 pt2 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.357 [2024-11-15 11:29:44.091019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:18:01.357 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.357 "name": "raid_bdev1", 00:18:01.357 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:01.358 "strip_size_kb": 64, 00:18:01.358 "state": "configuring", 00:18:01.358 "raid_level": "raid5f", 00:18:01.358 "superblock": true, 00:18:01.358 "num_base_bdevs": 4, 00:18:01.358 "num_base_bdevs_discovered": 1, 00:18:01.358 "num_base_bdevs_operational": 4, 00:18:01.358 "base_bdevs_list": [ 00:18:01.358 { 00:18:01.358 "name": "pt1", 00:18:01.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.358 "is_configured": true, 00:18:01.358 "data_offset": 2048, 00:18:01.358 "data_size": 63488 00:18:01.358 }, 00:18:01.358 { 00:18:01.358 "name": null, 00:18:01.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.358 "is_configured": false, 00:18:01.358 "data_offset": 0, 00:18:01.358 "data_size": 63488 00:18:01.358 }, 00:18:01.358 { 00:18:01.358 "name": null, 00:18:01.358 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.358 "is_configured": false, 00:18:01.358 "data_offset": 2048, 00:18:01.358 "data_size": 63488 00:18:01.358 }, 00:18:01.358 { 00:18:01.358 "name": null, 00:18:01.358 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.358 "is_configured": false, 00:18:01.358 "data_offset": 2048, 00:18:01.358 "data_size": 63488 00:18:01.358 } 00:18:01.358 ] 00:18:01.358 }' 00:18:01.358 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.358 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.925 [2024-11-15 11:29:44.631119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.925 [2024-11-15 11:29:44.631398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.925 [2024-11-15 11:29:44.631474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:01.925 [2024-11-15 11:29:44.631494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.925 [2024-11-15 11:29:44.632128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.925 [2024-11-15 11:29:44.632167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.925 [2024-11-15 11:29:44.632346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:01.925 [2024-11-15 11:29:44.632380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.925 pt2 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.925 [2024-11-15 11:29:44.643124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:01.925 [2024-11-15 11:29:44.643225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.925 [2024-11-15 11:29:44.643285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:01.925 [2024-11-15 11:29:44.643301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.925 [2024-11-15 11:29:44.643825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.925 [2024-11-15 11:29:44.643857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:01.925 [2024-11-15 11:29:44.643936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:01.925 [2024-11-15 11:29:44.643970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:01.925 pt3 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.925 [2024-11-15 11:29:44.651085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:01.925 [2024-11-15 11:29:44.651329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.925 [2024-11-15 11:29:44.651398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:01.925 [2024-11-15 11:29:44.651504] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.925 [2024-11-15 11:29:44.652126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.925 [2024-11-15 11:29:44.652304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:01.925 [2024-11-15 11:29:44.652511] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:01.925 [2024-11-15 11:29:44.652660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:01.925 [2024-11-15 11:29:44.652894] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:01.925 [2024-11-15 11:29:44.653002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:01.925 [2024-11-15 11:29:44.653384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:01.925 [2024-11-15 11:29:44.659555] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:01.925 pt4 00:18:01.925 [2024-11-15 11:29:44.659752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:01.925 [2024-11-15 11:29:44.660009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.925 "name": "raid_bdev1", 00:18:01.925 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:01.925 "strip_size_kb": 64, 00:18:01.925 "state": "online", 00:18:01.925 "raid_level": "raid5f", 00:18:01.925 "superblock": true, 00:18:01.925 "num_base_bdevs": 4, 00:18:01.925 "num_base_bdevs_discovered": 4, 00:18:01.925 "num_base_bdevs_operational": 4, 00:18:01.925 "base_bdevs_list": [ 00:18:01.925 { 00:18:01.925 "name": "pt1", 00:18:01.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.925 "is_configured": true, 00:18:01.925 
"data_offset": 2048, 00:18:01.925 "data_size": 63488 00:18:01.925 }, 00:18:01.925 { 00:18:01.925 "name": "pt2", 00:18:01.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.925 "is_configured": true, 00:18:01.925 "data_offset": 2048, 00:18:01.925 "data_size": 63488 00:18:01.925 }, 00:18:01.925 { 00:18:01.925 "name": "pt3", 00:18:01.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.925 "is_configured": true, 00:18:01.925 "data_offset": 2048, 00:18:01.925 "data_size": 63488 00:18:01.925 }, 00:18:01.925 { 00:18:01.925 "name": "pt4", 00:18:01.925 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.925 "is_configured": true, 00:18:01.925 "data_offset": 2048, 00:18:01.925 "data_size": 63488 00:18:01.925 } 00:18:01.925 ] 00:18:01.925 }' 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.925 11:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:02.493 11:29:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.493 [2024-11-15 11:29:45.208887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:02.493 "name": "raid_bdev1", 00:18:02.493 "aliases": [ 00:18:02.493 "0d4c5e31-8d60-407f-9c30-f7f327da9523" 00:18:02.493 ], 00:18:02.493 "product_name": "Raid Volume", 00:18:02.493 "block_size": 512, 00:18:02.493 "num_blocks": 190464, 00:18:02.493 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:02.493 "assigned_rate_limits": { 00:18:02.493 "rw_ios_per_sec": 0, 00:18:02.493 "rw_mbytes_per_sec": 0, 00:18:02.493 "r_mbytes_per_sec": 0, 00:18:02.493 "w_mbytes_per_sec": 0 00:18:02.493 }, 00:18:02.493 "claimed": false, 00:18:02.493 "zoned": false, 00:18:02.493 "supported_io_types": { 00:18:02.493 "read": true, 00:18:02.493 "write": true, 00:18:02.493 "unmap": false, 00:18:02.493 "flush": false, 00:18:02.493 "reset": true, 00:18:02.493 "nvme_admin": false, 00:18:02.493 "nvme_io": false, 00:18:02.493 "nvme_io_md": false, 00:18:02.493 "write_zeroes": true, 00:18:02.493 "zcopy": false, 00:18:02.493 "get_zone_info": false, 00:18:02.493 "zone_management": false, 00:18:02.493 "zone_append": false, 00:18:02.493 "compare": false, 00:18:02.493 "compare_and_write": false, 00:18:02.493 "abort": false, 00:18:02.493 "seek_hole": false, 00:18:02.493 "seek_data": false, 00:18:02.493 "copy": false, 00:18:02.493 "nvme_iov_md": false 00:18:02.493 }, 00:18:02.493 "driver_specific": { 00:18:02.493 "raid": { 00:18:02.493 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:02.493 "strip_size_kb": 64, 00:18:02.493 "state": "online", 00:18:02.493 "raid_level": "raid5f", 00:18:02.493 "superblock": true, 00:18:02.493 "num_base_bdevs": 4, 00:18:02.493 "num_base_bdevs_discovered": 4, 
00:18:02.493 "num_base_bdevs_operational": 4, 00:18:02.493 "base_bdevs_list": [ 00:18:02.493 { 00:18:02.493 "name": "pt1", 00:18:02.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.493 "is_configured": true, 00:18:02.493 "data_offset": 2048, 00:18:02.493 "data_size": 63488 00:18:02.493 }, 00:18:02.493 { 00:18:02.493 "name": "pt2", 00:18:02.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.493 "is_configured": true, 00:18:02.493 "data_offset": 2048, 00:18:02.493 "data_size": 63488 00:18:02.493 }, 00:18:02.493 { 00:18:02.493 "name": "pt3", 00:18:02.493 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.493 "is_configured": true, 00:18:02.493 "data_offset": 2048, 00:18:02.493 "data_size": 63488 00:18:02.493 }, 00:18:02.493 { 00:18:02.493 "name": "pt4", 00:18:02.493 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.493 "is_configured": true, 00:18:02.493 "data_offset": 2048, 00:18:02.493 "data_size": 63488 00:18:02.493 } 00:18:02.493 ] 00:18:02.493 } 00:18:02.493 } 00:18:02.493 }' 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:02.493 pt2 00:18:02.493 pt3 00:18:02.493 pt4' 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.493 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.752 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.752 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.752 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.752 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.752 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:02.752 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.752 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.752 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.752 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.752 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.752 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.752 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.753 [2024-11-15 11:29:45.600804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.753 11:29:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0d4c5e31-8d60-407f-9c30-f7f327da9523 '!=' 0d4c5e31-8d60-407f-9c30-f7f327da9523 ']' 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.753 [2024-11-15 11:29:45.652708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.753 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.011 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.011 "name": "raid_bdev1", 00:18:03.011 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:03.011 "strip_size_kb": 64, 00:18:03.011 "state": "online", 00:18:03.011 "raid_level": "raid5f", 00:18:03.011 "superblock": true, 00:18:03.011 "num_base_bdevs": 4, 00:18:03.011 "num_base_bdevs_discovered": 3, 00:18:03.011 "num_base_bdevs_operational": 3, 00:18:03.011 "base_bdevs_list": [ 00:18:03.011 { 00:18:03.011 "name": null, 00:18:03.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.011 "is_configured": false, 00:18:03.011 "data_offset": 0, 00:18:03.011 "data_size": 63488 00:18:03.011 }, 00:18:03.011 { 00:18:03.011 "name": "pt2", 00:18:03.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.011 "is_configured": true, 00:18:03.011 "data_offset": 2048, 00:18:03.011 "data_size": 63488 00:18:03.011 }, 00:18:03.011 { 00:18:03.011 "name": "pt3", 00:18:03.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.011 "is_configured": true, 00:18:03.011 "data_offset": 2048, 00:18:03.011 "data_size": 63488 00:18:03.011 }, 00:18:03.011 { 00:18:03.011 "name": "pt4", 00:18:03.011 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:03.011 "is_configured": true, 00:18:03.011 
"data_offset": 2048, 00:18:03.011 "data_size": 63488 00:18:03.011 } 00:18:03.011 ] 00:18:03.011 }' 00:18:03.011 11:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.011 11:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.270 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.270 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.270 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.270 [2024-11-15 11:29:46.200858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.270 [2024-11-15 11:29:46.200897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.270 [2024-11-15 11:29:46.201004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.270 [2024-11-15 11:29:46.201110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.270 [2024-11-15 11:29:46.201131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:03.270 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.270 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:03.270 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.270 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.270 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.528 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.529 [2024-11-15 11:29:46.284849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.529 [2024-11-15 11:29:46.285034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.529 [2024-11-15 11:29:46.285104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:03.529 [2024-11-15 11:29:46.285303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.529 [2024-11-15 11:29:46.288497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.529 [2024-11-15 11:29:46.288552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.529 [2024-11-15 11:29:46.288648] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:03.529 [2024-11-15 11:29:46.288703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.529 pt2 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.529 "name": "raid_bdev1", 00:18:03.529 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:03.529 "strip_size_kb": 64, 00:18:03.529 "state": "configuring", 00:18:03.529 "raid_level": "raid5f", 00:18:03.529 "superblock": true, 00:18:03.529 
"num_base_bdevs": 4, 00:18:03.529 "num_base_bdevs_discovered": 1, 00:18:03.529 "num_base_bdevs_operational": 3, 00:18:03.529 "base_bdevs_list": [ 00:18:03.529 { 00:18:03.529 "name": null, 00:18:03.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.529 "is_configured": false, 00:18:03.529 "data_offset": 2048, 00:18:03.529 "data_size": 63488 00:18:03.529 }, 00:18:03.529 { 00:18:03.529 "name": "pt2", 00:18:03.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.529 "is_configured": true, 00:18:03.529 "data_offset": 2048, 00:18:03.529 "data_size": 63488 00:18:03.529 }, 00:18:03.529 { 00:18:03.529 "name": null, 00:18:03.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.529 "is_configured": false, 00:18:03.529 "data_offset": 2048, 00:18:03.529 "data_size": 63488 00:18:03.529 }, 00:18:03.529 { 00:18:03.529 "name": null, 00:18:03.529 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:03.529 "is_configured": false, 00:18:03.529 "data_offset": 2048, 00:18:03.529 "data_size": 63488 00:18:03.529 } 00:18:03.529 ] 00:18:03.529 }' 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.529 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.094 [2024-11-15 11:29:46.817090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:04.094 [2024-11-15 
11:29:46.817380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.094 [2024-11-15 11:29:46.817543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:04.094 [2024-11-15 11:29:46.817590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.094 [2024-11-15 11:29:46.818351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.094 [2024-11-15 11:29:46.818410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:04.094 [2024-11-15 11:29:46.818676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:04.094 [2024-11-15 11:29:46.818838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:04.094 pt3 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.094 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.094 "name": "raid_bdev1", 00:18:04.094 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:04.094 "strip_size_kb": 64, 00:18:04.094 "state": "configuring", 00:18:04.094 "raid_level": "raid5f", 00:18:04.094 "superblock": true, 00:18:04.094 "num_base_bdevs": 4, 00:18:04.094 "num_base_bdevs_discovered": 2, 00:18:04.094 "num_base_bdevs_operational": 3, 00:18:04.094 "base_bdevs_list": [ 00:18:04.094 { 00:18:04.094 "name": null, 00:18:04.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.094 "is_configured": false, 00:18:04.094 "data_offset": 2048, 00:18:04.094 "data_size": 63488 00:18:04.094 }, 00:18:04.094 { 00:18:04.095 "name": "pt2", 00:18:04.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.095 "is_configured": true, 00:18:04.095 "data_offset": 2048, 00:18:04.095 "data_size": 63488 00:18:04.095 }, 00:18:04.095 { 00:18:04.095 "name": "pt3", 00:18:04.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:04.095 "is_configured": true, 00:18:04.095 "data_offset": 2048, 00:18:04.095 "data_size": 63488 00:18:04.095 }, 00:18:04.095 { 00:18:04.095 "name": null, 00:18:04.095 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:04.095 "is_configured": false, 00:18:04.095 "data_offset": 2048, 
00:18:04.095 "data_size": 63488 00:18:04.095 } 00:18:04.095 ] 00:18:04.095 }' 00:18:04.095 11:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.095 11:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.661 [2024-11-15 11:29:47.333242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:04.661 [2024-11-15 11:29:47.333520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.661 [2024-11-15 11:29:47.333612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:04.661 [2024-11-15 11:29:47.333632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.661 [2024-11-15 11:29:47.334369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.661 [2024-11-15 11:29:47.334396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:04.661 [2024-11-15 11:29:47.334531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:04.661 [2024-11-15 11:29:47.334593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:04.661 [2024-11-15 11:29:47.334790] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:04.661 [2024-11-15 11:29:47.334805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:04.661 [2024-11-15 11:29:47.335088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:04.661 pt4 00:18:04.661 [2024-11-15 11:29:47.341626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:04.661 [2024-11-15 11:29:47.341654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:04.661 [2024-11-15 11:29:47.341992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.661 
11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.661 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.661 "name": "raid_bdev1", 00:18:04.661 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:04.661 "strip_size_kb": 64, 00:18:04.661 "state": "online", 00:18:04.661 "raid_level": "raid5f", 00:18:04.661 "superblock": true, 00:18:04.661 "num_base_bdevs": 4, 00:18:04.661 "num_base_bdevs_discovered": 3, 00:18:04.661 "num_base_bdevs_operational": 3, 00:18:04.661 "base_bdevs_list": [ 00:18:04.661 { 00:18:04.661 "name": null, 00:18:04.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.661 "is_configured": false, 00:18:04.661 "data_offset": 2048, 00:18:04.661 "data_size": 63488 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "name": "pt2", 00:18:04.661 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.661 "is_configured": true, 00:18:04.661 "data_offset": 2048, 00:18:04.661 "data_size": 63488 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "name": "pt3", 00:18:04.661 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:04.661 "is_configured": true, 00:18:04.661 "data_offset": 2048, 00:18:04.661 "data_size": 63488 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "name": "pt4", 00:18:04.661 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:04.661 "is_configured": true, 00:18:04.661 "data_offset": 2048, 00:18:04.661 "data_size": 63488 00:18:04.661 } 00:18:04.661 ] 00:18:04.662 }' 00:18:04.662 11:29:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.662 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.920 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:04.920 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.920 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.920 [2024-11-15 11:29:47.866488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.920 [2024-11-15 11:29:47.866528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.920 [2024-11-15 11:29:47.866640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.920 [2024-11-15 11:29:47.866767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.920 [2024-11-15 11:29:47.866817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.180 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.180 [2024-11-15 11:29:47.938451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:05.180 [2024-11-15 11:29:47.938722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.180 [2024-11-15 11:29:47.938764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:05.180 [2024-11-15 11:29:47.938782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.180 [2024-11-15 11:29:47.941742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.180 [2024-11-15 11:29:47.941934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:05.180 [2024-11-15 11:29:47.942044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:05.180 [2024-11-15 11:29:47.942105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:05.180 
[2024-11-15 11:29:47.942321] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:05.180 [2024-11-15 11:29:47.942346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:05.181 [2024-11-15 11:29:47.942366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:05.181 [2024-11-15 11:29:47.942454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.181 pt1 00:18:05.181 [2024-11-15 11:29:47.942679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.181 "name": "raid_bdev1", 00:18:05.181 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:05.181 "strip_size_kb": 64, 00:18:05.181 "state": "configuring", 00:18:05.181 "raid_level": "raid5f", 00:18:05.181 "superblock": true, 00:18:05.181 "num_base_bdevs": 4, 00:18:05.181 "num_base_bdevs_discovered": 2, 00:18:05.181 "num_base_bdevs_operational": 3, 00:18:05.181 "base_bdevs_list": [ 00:18:05.181 { 00:18:05.181 "name": null, 00:18:05.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.181 "is_configured": false, 00:18:05.181 "data_offset": 2048, 00:18:05.181 "data_size": 63488 00:18:05.181 }, 00:18:05.181 { 00:18:05.181 "name": "pt2", 00:18:05.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.181 "is_configured": true, 00:18:05.181 "data_offset": 2048, 00:18:05.181 "data_size": 63488 00:18:05.181 }, 00:18:05.181 { 00:18:05.181 "name": "pt3", 00:18:05.181 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:05.181 "is_configured": true, 00:18:05.181 "data_offset": 2048, 00:18:05.181 "data_size": 63488 00:18:05.181 }, 00:18:05.181 { 00:18:05.181 "name": null, 00:18:05.181 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:05.181 "is_configured": false, 00:18:05.181 "data_offset": 2048, 00:18:05.181 "data_size": 63488 00:18:05.181 } 00:18:05.181 ] 
00:18:05.181 }' 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.181 11:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.817 [2024-11-15 11:29:48.514877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:05.817 [2024-11-15 11:29:48.515112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.817 [2024-11-15 11:29:48.515158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:05.817 [2024-11-15 11:29:48.515247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.817 [2024-11-15 11:29:48.515919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.817 [2024-11-15 11:29:48.515941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:05.817 [2024-11-15 11:29:48.516042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:05.817 [2024-11-15 11:29:48.516073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:05.817 [2024-11-15 11:29:48.516275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:05.817 [2024-11-15 11:29:48.516475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:05.817 [2024-11-15 11:29:48.516942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:05.817 [2024-11-15 11:29:48.523399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:05.817 [2024-11-15 11:29:48.523598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:05.817 [2024-11-15 11:29:48.524051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.817 pt4 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.817 11:29:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.817 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.817 "name": "raid_bdev1", 00:18:05.817 "uuid": "0d4c5e31-8d60-407f-9c30-f7f327da9523", 00:18:05.817 "strip_size_kb": 64, 00:18:05.817 "state": "online", 00:18:05.817 "raid_level": "raid5f", 00:18:05.817 "superblock": true, 00:18:05.817 "num_base_bdevs": 4, 00:18:05.817 "num_base_bdevs_discovered": 3, 00:18:05.817 "num_base_bdevs_operational": 3, 00:18:05.817 "base_bdevs_list": [ 00:18:05.817 { 00:18:05.817 "name": null, 00:18:05.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.817 "is_configured": false, 00:18:05.817 "data_offset": 2048, 00:18:05.817 "data_size": 63488 00:18:05.817 }, 00:18:05.817 { 00:18:05.817 "name": "pt2", 00:18:05.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.817 "is_configured": true, 00:18:05.817 "data_offset": 2048, 00:18:05.817 "data_size": 63488 00:18:05.817 }, 00:18:05.817 { 00:18:05.817 "name": "pt3", 00:18:05.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:05.817 "is_configured": true, 00:18:05.817 "data_offset": 2048, 00:18:05.817 "data_size": 63488 
00:18:05.817 }, 00:18:05.817 { 00:18:05.817 "name": "pt4", 00:18:05.817 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:05.818 "is_configured": true, 00:18:05.818 "data_offset": 2048, 00:18:05.818 "data_size": 63488 00:18:05.818 } 00:18:05.818 ] 00:18:05.818 }' 00:18:05.818 11:29:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.818 11:29:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.077 11:29:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:06.077 11:29:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:06.077 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.077 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.337 [2024-11-15 11:29:49.084615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0d4c5e31-8d60-407f-9c30-f7f327da9523 '!=' 0d4c5e31-8d60-407f-9c30-f7f327da9523 ']' 00:18:06.337 11:29:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84390 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84390 ']' 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84390 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84390 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84390' 00:18:06.337 killing process with pid 84390 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84390 00:18:06.337 [2024-11-15 11:29:49.166375] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:06.337 11:29:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84390 00:18:06.337 [2024-11-15 11:29:49.166521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.337 [2024-11-15 11:29:49.166633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.337 [2024-11-15 11:29:49.166654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:06.906 [2024-11-15 11:29:49.563009] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.844 11:29:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:07.844 
00:18:07.844 real 0m9.539s 00:18:07.844 user 0m15.625s 00:18:07.844 sys 0m1.423s 00:18:07.844 11:29:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:07.844 11:29:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.844 ************************************ 00:18:07.844 END TEST raid5f_superblock_test 00:18:07.844 ************************************ 00:18:07.844 11:29:50 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:07.844 11:29:50 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:07.844 11:29:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:07.844 11:29:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:07.844 11:29:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.844 ************************************ 00:18:07.844 START TEST raid5f_rebuild_test 00:18:07.844 ************************************ 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:07.844 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:07.845 11:29:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84881 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84881 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 84881 ']' 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:07.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:07.845 11:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.845 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:07.845 Zero copy mechanism will not be used. 00:18:07.845 [2024-11-15 11:29:50.780005] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:18:07.845 [2024-11-15 11:29:50.780244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84881 ] 00:18:08.111 [2024-11-15 11:29:50.967401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.378 [2024-11-15 11:29:51.104116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.378 [2024-11-15 11:29:51.301084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.378 [2024-11-15 11:29:51.301150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.945 BaseBdev1_malloc 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.945 [2024-11-15 11:29:51.783055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:18:08.945 [2024-11-15 11:29:51.783317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.945 [2024-11-15 11:29:51.783394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:08.945 [2024-11-15 11:29:51.783671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.945 [2024-11-15 11:29:51.786623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.945 [2024-11-15 11:29:51.786815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:08.945 BaseBdev1 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.945 BaseBdev2_malloc 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.945 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.945 [2024-11-15 11:29:51.838334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:08.945 [2024-11-15 11:29:51.838603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.945 [2024-11-15 11:29:51.838662] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:08.945 [2024-11-15 11:29:51.838682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.946 [2024-11-15 11:29:51.841649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.946 [2024-11-15 11:29:51.841842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:08.946 BaseBdev2 00:18:08.946 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.946 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:08.946 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:08.946 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.946 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.205 BaseBdev3_malloc 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.205 [2024-11-15 11:29:51.904871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:09.205 [2024-11-15 11:29:51.905118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.205 [2024-11-15 11:29:51.905243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:09.205 [2024-11-15 11:29:51.905527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.205 
[2024-11-15 11:29:51.908862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.205 [2024-11-15 11:29:51.908924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:09.205 BaseBdev3 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.205 BaseBdev4_malloc 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.205 [2024-11-15 11:29:51.956813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:09.205 [2024-11-15 11:29:51.957079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.205 [2024-11-15 11:29:51.957155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:09.205 [2024-11-15 11:29:51.957373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.205 [2024-11-15 11:29:51.960558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.205 [2024-11-15 11:29:51.960758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:18:09.205 BaseBdev4 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.205 11:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.205 spare_malloc 00:18:09.205 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.205 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:09.205 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.205 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.205 spare_delay 00:18:09.205 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.205 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:09.205 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.205 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.205 [2024-11-15 11:29:52.026098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:09.205 [2024-11-15 11:29:52.026367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.205 [2024-11-15 11:29:52.026445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:09.205 [2024-11-15 11:29:52.026586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.205 [2024-11-15 11:29:52.029599] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.205 [2024-11-15 11:29:52.029777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:09.205 spare 00:18:09.205 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.205 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:09.205 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.205 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.206 [2024-11-15 11:29:52.034293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.206 [2024-11-15 11:29:52.036932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:09.206 [2024-11-15 11:29:52.037014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:09.206 [2024-11-15 11:29:52.037090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:09.206 [2024-11-15 11:29:52.037275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:09.206 [2024-11-15 11:29:52.037296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:09.206 [2024-11-15 11:29:52.037693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:09.206 [2024-11-15 11:29:52.044603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:09.206 [2024-11-15 11:29:52.044647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:09.206 [2024-11-15 11:29:52.044943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.206 11:29:52 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.206 "name": "raid_bdev1", 00:18:09.206 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:09.206 "strip_size_kb": 64, 00:18:09.206 "state": "online", 00:18:09.206 
"raid_level": "raid5f", 00:18:09.206 "superblock": false, 00:18:09.206 "num_base_bdevs": 4, 00:18:09.206 "num_base_bdevs_discovered": 4, 00:18:09.206 "num_base_bdevs_operational": 4, 00:18:09.206 "base_bdevs_list": [ 00:18:09.206 { 00:18:09.206 "name": "BaseBdev1", 00:18:09.206 "uuid": "83144316-d81e-55dd-9b36-597129b13345", 00:18:09.206 "is_configured": true, 00:18:09.206 "data_offset": 0, 00:18:09.206 "data_size": 65536 00:18:09.206 }, 00:18:09.206 { 00:18:09.206 "name": "BaseBdev2", 00:18:09.206 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:09.206 "is_configured": true, 00:18:09.206 "data_offset": 0, 00:18:09.206 "data_size": 65536 00:18:09.206 }, 00:18:09.206 { 00:18:09.206 "name": "BaseBdev3", 00:18:09.206 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:09.206 "is_configured": true, 00:18:09.206 "data_offset": 0, 00:18:09.206 "data_size": 65536 00:18:09.206 }, 00:18:09.206 { 00:18:09.206 "name": "BaseBdev4", 00:18:09.206 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:09.206 "is_configured": true, 00:18:09.206 "data_offset": 0, 00:18:09.206 "data_size": 65536 00:18:09.206 } 00:18:09.206 ] 00:18:09.206 }' 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.206 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.774 [2024-11-15 11:29:52.573563] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:18:09.774 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:10.033 [2024-11-15 11:29:52.897355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:10.033 /dev/nbd0 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:10.033 1+0 records in 00:18:10.033 1+0 records out 00:18:10.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037126 s, 11.0 MB/s 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:10.033 11:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:10.970 512+0 records in 00:18:10.970 512+0 records out 00:18:10.970 100663296 bytes (101 MB, 96 MiB) copied, 0.620676 s, 162 MB/s 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:10.970 
[2024-11-15 11:29:53.832947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.970 [2024-11-15 11:29:53.845818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.970 "name": "raid_bdev1", 00:18:10.970 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:10.970 "strip_size_kb": 64, 00:18:10.970 "state": "online", 00:18:10.970 "raid_level": "raid5f", 00:18:10.970 "superblock": false, 00:18:10.970 "num_base_bdevs": 4, 00:18:10.970 "num_base_bdevs_discovered": 3, 00:18:10.970 "num_base_bdevs_operational": 3, 00:18:10.970 "base_bdevs_list": [ 00:18:10.970 { 00:18:10.970 "name": null, 00:18:10.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.970 "is_configured": false, 00:18:10.970 "data_offset": 0, 00:18:10.970 "data_size": 65536 00:18:10.970 }, 00:18:10.970 { 00:18:10.970 "name": "BaseBdev2", 00:18:10.970 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:10.970 "is_configured": true, 00:18:10.970 "data_offset": 0, 00:18:10.970 "data_size": 65536 00:18:10.970 }, 00:18:10.970 { 00:18:10.970 "name": "BaseBdev3", 00:18:10.970 "uuid": 
"44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:10.970 "is_configured": true, 00:18:10.970 "data_offset": 0, 00:18:10.970 "data_size": 65536 00:18:10.970 }, 00:18:10.970 { 00:18:10.970 "name": "BaseBdev4", 00:18:10.970 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:10.970 "is_configured": true, 00:18:10.970 "data_offset": 0, 00:18:10.970 "data_size": 65536 00:18:10.970 } 00:18:10.970 ] 00:18:10.970 }' 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.970 11:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.538 11:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:11.538 11:29:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.538 11:29:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.538 [2024-11-15 11:29:54.370029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.538 [2024-11-15 11:29:54.384737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:11.538 11:29:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.538 11:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:11.538 [2024-11-15 11:29:54.394278] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:12.474 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.474 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.474 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.475 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.475 11:29:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.475 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.475 11:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.475 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.475 11:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.475 11:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.763 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.763 "name": "raid_bdev1", 00:18:12.763 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:12.763 "strip_size_kb": 64, 00:18:12.763 "state": "online", 00:18:12.763 "raid_level": "raid5f", 00:18:12.763 "superblock": false, 00:18:12.763 "num_base_bdevs": 4, 00:18:12.763 "num_base_bdevs_discovered": 4, 00:18:12.763 "num_base_bdevs_operational": 4, 00:18:12.763 "process": { 00:18:12.763 "type": "rebuild", 00:18:12.763 "target": "spare", 00:18:12.763 "progress": { 00:18:12.763 "blocks": 17280, 00:18:12.763 "percent": 8 00:18:12.763 } 00:18:12.763 }, 00:18:12.763 "base_bdevs_list": [ 00:18:12.763 { 00:18:12.763 "name": "spare", 00:18:12.763 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 00:18:12.763 "is_configured": true, 00:18:12.763 "data_offset": 0, 00:18:12.764 "data_size": 65536 00:18:12.764 }, 00:18:12.764 { 00:18:12.764 "name": "BaseBdev2", 00:18:12.764 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:12.764 "is_configured": true, 00:18:12.764 "data_offset": 0, 00:18:12.764 "data_size": 65536 00:18:12.764 }, 00:18:12.764 { 00:18:12.764 "name": "BaseBdev3", 00:18:12.764 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:12.764 "is_configured": true, 00:18:12.764 "data_offset": 0, 00:18:12.764 "data_size": 65536 00:18:12.764 }, 
00:18:12.764 { 00:18:12.764 "name": "BaseBdev4", 00:18:12.764 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:12.764 "is_configured": true, 00:18:12.764 "data_offset": 0, 00:18:12.764 "data_size": 65536 00:18:12.764 } 00:18:12.764 ] 00:18:12.764 }' 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.764 [2024-11-15 11:29:55.560085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:12.764 [2024-11-15 11:29:55.608025] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:12.764 [2024-11-15 11:29:55.608306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.764 [2024-11-15 11:29:55.608337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:12.764 [2024-11-15 11:29:55.608354] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.764 "name": "raid_bdev1", 00:18:12.764 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:12.764 "strip_size_kb": 64, 00:18:12.764 "state": "online", 00:18:12.764 "raid_level": "raid5f", 00:18:12.764 "superblock": false, 00:18:12.764 "num_base_bdevs": 4, 00:18:12.764 "num_base_bdevs_discovered": 3, 00:18:12.764 "num_base_bdevs_operational": 3, 00:18:12.764 "base_bdevs_list": [ 00:18:12.764 { 00:18:12.764 "name": null, 00:18:12.764 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:12.764 "is_configured": false, 00:18:12.764 "data_offset": 0, 00:18:12.764 "data_size": 65536 00:18:12.764 }, 00:18:12.764 { 00:18:12.764 "name": "BaseBdev2", 00:18:12.764 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:12.764 "is_configured": true, 00:18:12.764 "data_offset": 0, 00:18:12.764 "data_size": 65536 00:18:12.764 }, 00:18:12.764 { 00:18:12.764 "name": "BaseBdev3", 00:18:12.764 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:12.764 "is_configured": true, 00:18:12.764 "data_offset": 0, 00:18:12.764 "data_size": 65536 00:18:12.764 }, 00:18:12.764 { 00:18:12.764 "name": "BaseBdev4", 00:18:12.764 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:12.764 "is_configured": true, 00:18:12.764 "data_offset": 0, 00:18:12.764 "data_size": 65536 00:18:12.764 } 00:18:12.764 ] 00:18:12.764 }' 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.764 11:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.343 11:29:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.343 "name": "raid_bdev1", 00:18:13.343 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:13.343 "strip_size_kb": 64, 00:18:13.343 "state": "online", 00:18:13.343 "raid_level": "raid5f", 00:18:13.343 "superblock": false, 00:18:13.343 "num_base_bdevs": 4, 00:18:13.343 "num_base_bdevs_discovered": 3, 00:18:13.343 "num_base_bdevs_operational": 3, 00:18:13.343 "base_bdevs_list": [ 00:18:13.343 { 00:18:13.343 "name": null, 00:18:13.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.343 "is_configured": false, 00:18:13.343 "data_offset": 0, 00:18:13.343 "data_size": 65536 00:18:13.343 }, 00:18:13.343 { 00:18:13.343 "name": "BaseBdev2", 00:18:13.343 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:13.343 "is_configured": true, 00:18:13.343 "data_offset": 0, 00:18:13.343 "data_size": 65536 00:18:13.343 }, 00:18:13.343 { 00:18:13.343 "name": "BaseBdev3", 00:18:13.343 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:13.343 "is_configured": true, 00:18:13.343 "data_offset": 0, 00:18:13.343 "data_size": 65536 00:18:13.343 }, 00:18:13.343 { 00:18:13.343 "name": "BaseBdev4", 00:18:13.343 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:13.343 "is_configured": true, 00:18:13.343 "data_offset": 0, 00:18:13.343 "data_size": 65536 00:18:13.343 } 00:18:13.343 ] 00:18:13.343 }' 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.343 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.602 11:29:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.602 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:13.602 11:29:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.602 11:29:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.602 [2024-11-15 11:29:56.337013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:13.602 [2024-11-15 11:29:56.351103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:13.602 11:29:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.602 11:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:13.602 [2024-11-15 11:29:56.359680] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:14.538 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.538 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.538 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.538 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.538 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.538 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.538 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.538 11:29:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.539 11:29:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.539 11:29:57 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.539 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.539 "name": "raid_bdev1", 00:18:14.539 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:14.539 "strip_size_kb": 64, 00:18:14.539 "state": "online", 00:18:14.539 "raid_level": "raid5f", 00:18:14.539 "superblock": false, 00:18:14.539 "num_base_bdevs": 4, 00:18:14.539 "num_base_bdevs_discovered": 4, 00:18:14.539 "num_base_bdevs_operational": 4, 00:18:14.539 "process": { 00:18:14.539 "type": "rebuild", 00:18:14.539 "target": "spare", 00:18:14.539 "progress": { 00:18:14.539 "blocks": 17280, 00:18:14.539 "percent": 8 00:18:14.539 } 00:18:14.539 }, 00:18:14.539 "base_bdevs_list": [ 00:18:14.539 { 00:18:14.539 "name": "spare", 00:18:14.539 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 00:18:14.539 "is_configured": true, 00:18:14.539 "data_offset": 0, 00:18:14.539 "data_size": 65536 00:18:14.539 }, 00:18:14.539 { 00:18:14.539 "name": "BaseBdev2", 00:18:14.539 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:14.539 "is_configured": true, 00:18:14.539 "data_offset": 0, 00:18:14.539 "data_size": 65536 00:18:14.539 }, 00:18:14.539 { 00:18:14.539 "name": "BaseBdev3", 00:18:14.539 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:14.539 "is_configured": true, 00:18:14.539 "data_offset": 0, 00:18:14.539 "data_size": 65536 00:18:14.539 }, 00:18:14.539 { 00:18:14.539 "name": "BaseBdev4", 00:18:14.539 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:14.539 "is_configured": true, 00:18:14.539 "data_offset": 0, 00:18:14.539 "data_size": 65536 00:18:14.539 } 00:18:14.539 ] 00:18:14.539 }' 00:18:14.539 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.539 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.539 11:29:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.797 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.797 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:14.797 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=674 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.798 "name": "raid_bdev1", 00:18:14.798 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 
00:18:14.798 "strip_size_kb": 64, 00:18:14.798 "state": "online", 00:18:14.798 "raid_level": "raid5f", 00:18:14.798 "superblock": false, 00:18:14.798 "num_base_bdevs": 4, 00:18:14.798 "num_base_bdevs_discovered": 4, 00:18:14.798 "num_base_bdevs_operational": 4, 00:18:14.798 "process": { 00:18:14.798 "type": "rebuild", 00:18:14.798 "target": "spare", 00:18:14.798 "progress": { 00:18:14.798 "blocks": 21120, 00:18:14.798 "percent": 10 00:18:14.798 } 00:18:14.798 }, 00:18:14.798 "base_bdevs_list": [ 00:18:14.798 { 00:18:14.798 "name": "spare", 00:18:14.798 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 00:18:14.798 "is_configured": true, 00:18:14.798 "data_offset": 0, 00:18:14.798 "data_size": 65536 00:18:14.798 }, 00:18:14.798 { 00:18:14.798 "name": "BaseBdev2", 00:18:14.798 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:14.798 "is_configured": true, 00:18:14.798 "data_offset": 0, 00:18:14.798 "data_size": 65536 00:18:14.798 }, 00:18:14.798 { 00:18:14.798 "name": "BaseBdev3", 00:18:14.798 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:14.798 "is_configured": true, 00:18:14.798 "data_offset": 0, 00:18:14.798 "data_size": 65536 00:18:14.798 }, 00:18:14.798 { 00:18:14.798 "name": "BaseBdev4", 00:18:14.798 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:14.798 "is_configured": true, 00:18:14.798 "data_offset": 0, 00:18:14.798 "data_size": 65536 00:18:14.798 } 00:18:14.798 ] 00:18:14.798 }' 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.798 11:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:15.735 11:29:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.735 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.735 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.735 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.735 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.735 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.735 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.735 11:29:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.735 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.735 11:29:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.994 11:29:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.994 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.994 "name": "raid_bdev1", 00:18:15.994 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:15.994 "strip_size_kb": 64, 00:18:15.994 "state": "online", 00:18:15.994 "raid_level": "raid5f", 00:18:15.994 "superblock": false, 00:18:15.994 "num_base_bdevs": 4, 00:18:15.994 "num_base_bdevs_discovered": 4, 00:18:15.994 "num_base_bdevs_operational": 4, 00:18:15.994 "process": { 00:18:15.994 "type": "rebuild", 00:18:15.994 "target": "spare", 00:18:15.994 "progress": { 00:18:15.994 "blocks": 42240, 00:18:15.994 "percent": 21 00:18:15.994 } 00:18:15.994 }, 00:18:15.994 "base_bdevs_list": [ 00:18:15.994 { 00:18:15.994 "name": "spare", 00:18:15.994 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 
00:18:15.994 "is_configured": true, 00:18:15.994 "data_offset": 0, 00:18:15.994 "data_size": 65536 00:18:15.994 }, 00:18:15.994 { 00:18:15.994 "name": "BaseBdev2", 00:18:15.994 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:15.994 "is_configured": true, 00:18:15.994 "data_offset": 0, 00:18:15.994 "data_size": 65536 00:18:15.994 }, 00:18:15.994 { 00:18:15.994 "name": "BaseBdev3", 00:18:15.994 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:15.994 "is_configured": true, 00:18:15.994 "data_offset": 0, 00:18:15.994 "data_size": 65536 00:18:15.994 }, 00:18:15.994 { 00:18:15.994 "name": "BaseBdev4", 00:18:15.994 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:15.994 "is_configured": true, 00:18:15.994 "data_offset": 0, 00:18:15.994 "data_size": 65536 00:18:15.994 } 00:18:15.994 ] 00:18:15.994 }' 00:18:15.994 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.994 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.994 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.994 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.994 11:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.930 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.931 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.931 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.931 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.931 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.931 11:29:59 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.931 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.931 11:29:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.931 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.931 11:29:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.931 11:29:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.189 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.190 "name": "raid_bdev1", 00:18:17.190 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:17.190 "strip_size_kb": 64, 00:18:17.190 "state": "online", 00:18:17.190 "raid_level": "raid5f", 00:18:17.190 "superblock": false, 00:18:17.190 "num_base_bdevs": 4, 00:18:17.190 "num_base_bdevs_discovered": 4, 00:18:17.190 "num_base_bdevs_operational": 4, 00:18:17.190 "process": { 00:18:17.190 "type": "rebuild", 00:18:17.190 "target": "spare", 00:18:17.190 "progress": { 00:18:17.190 "blocks": 65280, 00:18:17.190 "percent": 33 00:18:17.190 } 00:18:17.190 }, 00:18:17.190 "base_bdevs_list": [ 00:18:17.190 { 00:18:17.190 "name": "spare", 00:18:17.190 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 00:18:17.190 "is_configured": true, 00:18:17.190 "data_offset": 0, 00:18:17.190 "data_size": 65536 00:18:17.190 }, 00:18:17.190 { 00:18:17.190 "name": "BaseBdev2", 00:18:17.190 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:17.190 "is_configured": true, 00:18:17.190 "data_offset": 0, 00:18:17.190 "data_size": 65536 00:18:17.190 }, 00:18:17.190 { 00:18:17.190 "name": "BaseBdev3", 00:18:17.190 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:17.190 "is_configured": true, 00:18:17.190 "data_offset": 0, 00:18:17.190 "data_size": 65536 00:18:17.190 }, 00:18:17.190 { 00:18:17.190 "name": 
"BaseBdev4", 00:18:17.190 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:17.190 "is_configured": true, 00:18:17.190 "data_offset": 0, 00:18:17.190 "data_size": 65536 00:18:17.190 } 00:18:17.190 ] 00:18:17.190 }' 00:18:17.190 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.190 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.190 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.190 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.190 11:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:18.126 11:30:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.126 11:30:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.126 11:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.126 11:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.126 11:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.126 11:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.126 11:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.126 11:30:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.126 11:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.126 11:30:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.126 11:30:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.126 11:30:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.126 "name": "raid_bdev1", 00:18:18.126 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:18.126 "strip_size_kb": 64, 00:18:18.126 "state": "online", 00:18:18.126 "raid_level": "raid5f", 00:18:18.126 "superblock": false, 00:18:18.126 "num_base_bdevs": 4, 00:18:18.126 "num_base_bdevs_discovered": 4, 00:18:18.126 "num_base_bdevs_operational": 4, 00:18:18.126 "process": { 00:18:18.126 "type": "rebuild", 00:18:18.126 "target": "spare", 00:18:18.126 "progress": { 00:18:18.126 "blocks": 88320, 00:18:18.126 "percent": 44 00:18:18.126 } 00:18:18.126 }, 00:18:18.126 "base_bdevs_list": [ 00:18:18.126 { 00:18:18.126 "name": "spare", 00:18:18.126 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 00:18:18.126 "is_configured": true, 00:18:18.126 "data_offset": 0, 00:18:18.126 "data_size": 65536 00:18:18.126 }, 00:18:18.126 { 00:18:18.126 "name": "BaseBdev2", 00:18:18.126 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:18.126 "is_configured": true, 00:18:18.126 "data_offset": 0, 00:18:18.126 "data_size": 65536 00:18:18.126 }, 00:18:18.126 { 00:18:18.126 "name": "BaseBdev3", 00:18:18.126 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:18.126 "is_configured": true, 00:18:18.126 "data_offset": 0, 00:18:18.126 "data_size": 65536 00:18:18.126 }, 00:18:18.126 { 00:18:18.126 "name": "BaseBdev4", 00:18:18.126 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:18.126 "is_configured": true, 00:18:18.126 "data_offset": 0, 00:18:18.126 "data_size": 65536 00:18:18.126 } 00:18:18.126 ] 00:18:18.126 }' 00:18:18.126 11:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.388 11:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.388 11:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.388 11:30:01 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.388 11:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.325 "name": "raid_bdev1", 00:18:19.325 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:19.325 "strip_size_kb": 64, 00:18:19.325 "state": "online", 00:18:19.325 "raid_level": "raid5f", 00:18:19.325 "superblock": false, 00:18:19.325 "num_base_bdevs": 4, 00:18:19.325 "num_base_bdevs_discovered": 4, 00:18:19.325 "num_base_bdevs_operational": 4, 00:18:19.325 "process": { 00:18:19.325 "type": "rebuild", 00:18:19.325 "target": "spare", 00:18:19.325 "progress": { 00:18:19.325 "blocks": 109440, 00:18:19.325 "percent": 55 00:18:19.325 } 
00:18:19.325 }, 00:18:19.325 "base_bdevs_list": [ 00:18:19.325 { 00:18:19.325 "name": "spare", 00:18:19.325 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 00:18:19.325 "is_configured": true, 00:18:19.325 "data_offset": 0, 00:18:19.325 "data_size": 65536 00:18:19.325 }, 00:18:19.325 { 00:18:19.325 "name": "BaseBdev2", 00:18:19.325 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:19.325 "is_configured": true, 00:18:19.325 "data_offset": 0, 00:18:19.325 "data_size": 65536 00:18:19.325 }, 00:18:19.325 { 00:18:19.325 "name": "BaseBdev3", 00:18:19.325 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:19.325 "is_configured": true, 00:18:19.325 "data_offset": 0, 00:18:19.325 "data_size": 65536 00:18:19.325 }, 00:18:19.325 { 00:18:19.325 "name": "BaseBdev4", 00:18:19.325 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:19.325 "is_configured": true, 00:18:19.325 "data_offset": 0, 00:18:19.325 "data_size": 65536 00:18:19.325 } 00:18:19.325 ] 00:18:19.325 }' 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.325 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.631 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.631 11:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.615 
11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.615 "name": "raid_bdev1", 00:18:20.615 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:20.615 "strip_size_kb": 64, 00:18:20.615 "state": "online", 00:18:20.615 "raid_level": "raid5f", 00:18:20.615 "superblock": false, 00:18:20.615 "num_base_bdevs": 4, 00:18:20.615 "num_base_bdevs_discovered": 4, 00:18:20.615 "num_base_bdevs_operational": 4, 00:18:20.615 "process": { 00:18:20.615 "type": "rebuild", 00:18:20.615 "target": "spare", 00:18:20.615 "progress": { 00:18:20.615 "blocks": 130560, 00:18:20.615 "percent": 66 00:18:20.615 } 00:18:20.615 }, 00:18:20.615 "base_bdevs_list": [ 00:18:20.615 { 00:18:20.615 "name": "spare", 00:18:20.615 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 00:18:20.615 "is_configured": true, 00:18:20.615 "data_offset": 0, 00:18:20.615 "data_size": 65536 00:18:20.615 }, 00:18:20.615 { 00:18:20.615 "name": "BaseBdev2", 00:18:20.615 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:20.615 "is_configured": true, 00:18:20.615 "data_offset": 0, 00:18:20.615 "data_size": 65536 00:18:20.615 }, 00:18:20.615 { 00:18:20.615 "name": "BaseBdev3", 00:18:20.615 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 
00:18:20.615 "is_configured": true, 00:18:20.615 "data_offset": 0, 00:18:20.615 "data_size": 65536 00:18:20.615 }, 00:18:20.615 { 00:18:20.615 "name": "BaseBdev4", 00:18:20.615 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:20.615 "is_configured": true, 00:18:20.615 "data_offset": 0, 00:18:20.615 "data_size": 65536 00:18:20.615 } 00:18:20.615 ] 00:18:20.615 }' 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.615 11:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:21.551 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.551 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.551 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.551 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.551 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.551 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.551 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.551 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.551 11:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.551 11:30:04 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.810 11:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.810 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.810 "name": "raid_bdev1", 00:18:21.810 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:21.810 "strip_size_kb": 64, 00:18:21.810 "state": "online", 00:18:21.810 "raid_level": "raid5f", 00:18:21.810 "superblock": false, 00:18:21.810 "num_base_bdevs": 4, 00:18:21.810 "num_base_bdevs_discovered": 4, 00:18:21.810 "num_base_bdevs_operational": 4, 00:18:21.810 "process": { 00:18:21.810 "type": "rebuild", 00:18:21.810 "target": "spare", 00:18:21.810 "progress": { 00:18:21.810 "blocks": 153600, 00:18:21.810 "percent": 78 00:18:21.810 } 00:18:21.810 }, 00:18:21.810 "base_bdevs_list": [ 00:18:21.810 { 00:18:21.810 "name": "spare", 00:18:21.810 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 00:18:21.810 "is_configured": true, 00:18:21.810 "data_offset": 0, 00:18:21.810 "data_size": 65536 00:18:21.810 }, 00:18:21.810 { 00:18:21.810 "name": "BaseBdev2", 00:18:21.810 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:21.810 "is_configured": true, 00:18:21.810 "data_offset": 0, 00:18:21.810 "data_size": 65536 00:18:21.810 }, 00:18:21.810 { 00:18:21.810 "name": "BaseBdev3", 00:18:21.810 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:21.810 "is_configured": true, 00:18:21.810 "data_offset": 0, 00:18:21.810 "data_size": 65536 00:18:21.810 }, 00:18:21.810 { 00:18:21.810 "name": "BaseBdev4", 00:18:21.810 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:21.810 "is_configured": true, 00:18:21.810 "data_offset": 0, 00:18:21.810 "data_size": 65536 00:18:21.810 } 00:18:21.810 ] 00:18:21.810 }' 00:18:21.810 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.810 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:18:21.810 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.810 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.810 11:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:22.745 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.745 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.745 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.745 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.745 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.745 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.745 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.745 11:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.745 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.745 11:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.745 11:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.003 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.003 "name": "raid_bdev1", 00:18:23.003 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:23.003 "strip_size_kb": 64, 00:18:23.003 "state": "online", 00:18:23.003 "raid_level": "raid5f", 00:18:23.003 "superblock": false, 00:18:23.003 "num_base_bdevs": 4, 00:18:23.003 "num_base_bdevs_discovered": 4, 00:18:23.003 "num_base_bdevs_operational": 4, 00:18:23.003 
"process": { 00:18:23.003 "type": "rebuild", 00:18:23.003 "target": "spare", 00:18:23.003 "progress": { 00:18:23.003 "blocks": 174720, 00:18:23.003 "percent": 88 00:18:23.003 } 00:18:23.003 }, 00:18:23.003 "base_bdevs_list": [ 00:18:23.003 { 00:18:23.003 "name": "spare", 00:18:23.003 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 00:18:23.003 "is_configured": true, 00:18:23.003 "data_offset": 0, 00:18:23.003 "data_size": 65536 00:18:23.003 }, 00:18:23.003 { 00:18:23.003 "name": "BaseBdev2", 00:18:23.003 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:23.003 "is_configured": true, 00:18:23.003 "data_offset": 0, 00:18:23.003 "data_size": 65536 00:18:23.003 }, 00:18:23.003 { 00:18:23.003 "name": "BaseBdev3", 00:18:23.003 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:23.003 "is_configured": true, 00:18:23.003 "data_offset": 0, 00:18:23.003 "data_size": 65536 00:18:23.003 }, 00:18:23.003 { 00:18:23.003 "name": "BaseBdev4", 00:18:23.003 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:23.003 "is_configured": true, 00:18:23.003 "data_offset": 0, 00:18:23.003 "data_size": 65536 00:18:23.003 } 00:18:23.003 ] 00:18:23.003 }' 00:18:23.003 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.003 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.003 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.003 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.003 11:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:23.938 [2024-11-15 11:30:06.775808] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:23.938 [2024-11-15 11:30:06.775893] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:23.938 [2024-11-15 
11:30:06.775971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.938 "name": "raid_bdev1", 00:18:23.938 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:23.938 "strip_size_kb": 64, 00:18:23.938 "state": "online", 00:18:23.938 "raid_level": "raid5f", 00:18:23.938 "superblock": false, 00:18:23.938 "num_base_bdevs": 4, 00:18:23.938 "num_base_bdevs_discovered": 4, 00:18:23.938 "num_base_bdevs_operational": 4, 00:18:23.938 "base_bdevs_list": [ 00:18:23.938 { 00:18:23.938 "name": "spare", 00:18:23.938 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 00:18:23.938 "is_configured": true, 00:18:23.938 "data_offset": 0, 00:18:23.938 "data_size": 65536 
00:18:23.938 }, 00:18:23.938 { 00:18:23.938 "name": "BaseBdev2", 00:18:23.938 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:23.938 "is_configured": true, 00:18:23.938 "data_offset": 0, 00:18:23.938 "data_size": 65536 00:18:23.938 }, 00:18:23.938 { 00:18:23.938 "name": "BaseBdev3", 00:18:23.938 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:23.938 "is_configured": true, 00:18:23.938 "data_offset": 0, 00:18:23.938 "data_size": 65536 00:18:23.938 }, 00:18:23.938 { 00:18:23.938 "name": "BaseBdev4", 00:18:23.938 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:23.938 "is_configured": true, 00:18:23.938 "data_offset": 0, 00:18:23.938 "data_size": 65536 00:18:23.938 } 00:18:23.938 ] 00:18:23.938 }' 00:18:23.938 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.197 11:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.197 "name": "raid_bdev1", 00:18:24.197 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:24.197 "strip_size_kb": 64, 00:18:24.197 "state": "online", 00:18:24.197 "raid_level": "raid5f", 00:18:24.197 "superblock": false, 00:18:24.197 "num_base_bdevs": 4, 00:18:24.197 "num_base_bdevs_discovered": 4, 00:18:24.197 "num_base_bdevs_operational": 4, 00:18:24.197 "base_bdevs_list": [ 00:18:24.197 { 00:18:24.197 "name": "spare", 00:18:24.197 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 00:18:24.197 "is_configured": true, 00:18:24.197 "data_offset": 0, 00:18:24.197 "data_size": 65536 00:18:24.197 }, 00:18:24.197 { 00:18:24.197 "name": "BaseBdev2", 00:18:24.197 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:24.197 "is_configured": true, 00:18:24.197 "data_offset": 0, 00:18:24.197 "data_size": 65536 00:18:24.197 }, 00:18:24.197 { 00:18:24.197 "name": "BaseBdev3", 00:18:24.197 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:24.197 "is_configured": true, 00:18:24.197 "data_offset": 0, 00:18:24.197 "data_size": 65536 00:18:24.197 }, 00:18:24.197 { 00:18:24.197 "name": "BaseBdev4", 00:18:24.197 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:24.197 "is_configured": true, 00:18:24.197 "data_offset": 0, 00:18:24.197 "data_size": 65536 00:18:24.197 } 00:18:24.197 ] 00:18:24.197 }' 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.197 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.198 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.198 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.198 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.198 11:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.198 11:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.456 11:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.457 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.457 "name": 
"raid_bdev1", 00:18:24.457 "uuid": "7474d50a-db04-4823-9397-d8c04ca3fcb9", 00:18:24.457 "strip_size_kb": 64, 00:18:24.457 "state": "online", 00:18:24.457 "raid_level": "raid5f", 00:18:24.457 "superblock": false, 00:18:24.457 "num_base_bdevs": 4, 00:18:24.457 "num_base_bdevs_discovered": 4, 00:18:24.457 "num_base_bdevs_operational": 4, 00:18:24.457 "base_bdevs_list": [ 00:18:24.457 { 00:18:24.457 "name": "spare", 00:18:24.457 "uuid": "9f054f4e-2e41-5e4c-8ea2-ac13ae05bbdf", 00:18:24.457 "is_configured": true, 00:18:24.457 "data_offset": 0, 00:18:24.457 "data_size": 65536 00:18:24.457 }, 00:18:24.457 { 00:18:24.457 "name": "BaseBdev2", 00:18:24.457 "uuid": "c485db79-37c2-595e-8c37-4dc812dbb192", 00:18:24.457 "is_configured": true, 00:18:24.457 "data_offset": 0, 00:18:24.457 "data_size": 65536 00:18:24.457 }, 00:18:24.457 { 00:18:24.457 "name": "BaseBdev3", 00:18:24.457 "uuid": "44627e98-de6f-56cf-bcb4-26d5913d96cf", 00:18:24.457 "is_configured": true, 00:18:24.457 "data_offset": 0, 00:18:24.457 "data_size": 65536 00:18:24.457 }, 00:18:24.457 { 00:18:24.457 "name": "BaseBdev4", 00:18:24.457 "uuid": "20b04796-4424-5e5b-9a1e-e31923ee5622", 00:18:24.457 "is_configured": true, 00:18:24.457 "data_offset": 0, 00:18:24.457 "data_size": 65536 00:18:24.457 } 00:18:24.457 ] 00:18:24.457 }' 00:18:24.457 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.457 11:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.023 [2024-11-15 11:30:07.671446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:25.023 [2024-11-15 11:30:07.671500] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:25.023 [2024-11-15 11:30:07.671661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.023 [2024-11-15 11:30:07.671795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.023 [2024-11-15 11:30:07.671813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:25.023 11:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:25.282 /dev/nbd0 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:25.282 1+0 records in 00:18:25.282 1+0 records out 00:18:25.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381684 s, 10.7 MB/s 00:18:25.282 11:30:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:25.282 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:25.540 /dev/nbd1 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 
20 )) 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:25.540 1+0 records in 00:18:25.540 1+0 records out 00:18:25.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042786 s, 9.6 MB/s 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:25.540 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:25.798 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:25.798 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:25.798 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:25.798 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.798 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:25.798 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.798 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:26.057 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:26.057 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:26.057 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:26.057 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.057 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.057 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:26.057 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:26.057 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.057 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:26.057 11:30:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.627 11:30:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84881 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 84881 ']' 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 84881 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84881 00:18:26.627 killing process with pid 84881 00:18:26.627 Received shutdown signal, test time was about 60.000000 seconds 00:18:26.627 00:18:26.627 Latency(us) 00:18:26.627 [2024-11-15T11:30:09.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.627 [2024-11-15T11:30:09.577Z] =================================================================================================================== 00:18:26.627 [2024-11-15T11:30:09.577Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84881' 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 84881 00:18:26.627 11:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 84881 00:18:26.627 [2024-11-15 11:30:09.331066] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.886 [2024-11-15 11:30:09.777914] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:28.262 ************************************ 00:18:28.262 END TEST raid5f_rebuild_test 00:18:28.262 ************************************ 00:18:28.262 00:18:28.262 real 0m20.164s 00:18:28.262 user 0m24.976s 00:18:28.262 sys 0m2.404s 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.262 11:30:10 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:28.262 11:30:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:28.262 11:30:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:28.262 11:30:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.262 ************************************ 00:18:28.262 START TEST raid5f_rebuild_test_sb 00:18:28.262 ************************************ 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:28.262 11:30:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:28.262 
11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:28.262 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:28.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.263 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85391 00:18:28.263 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85391 00:18:28.263 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:28.263 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85391 ']' 00:18:28.263 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.263 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:28.263 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.263 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:28.263 11:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.263 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:28.263 Zero copy mechanism will not be used. 00:18:28.263 [2024-11-15 11:30:10.981059] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:18:28.263 [2024-11-15 11:30:10.981274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85391 ] 00:18:28.263 [2024-11-15 11:30:11.157274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.523 [2024-11-15 11:30:11.299018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.783 [2024-11-15 11:30:11.519901] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.783 [2024-11-15 11:30:11.519969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.041 11:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:29.041 11:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:29.041 11:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:29.041 11:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:29.041 11:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.041 11:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.298 BaseBdev1_malloc 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.298 
11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.298 [2024-11-15 11:30:12.022745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:29.298 [2024-11-15 11:30:12.022996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.298 [2024-11-15 11:30:12.023038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:29.298 [2024-11-15 11:30:12.023059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.298 [2024-11-15 11:30:12.026036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.298 [2024-11-15 11:30:12.026258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:29.298 BaseBdev1 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.298 BaseBdev2_malloc 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.298 [2024-11-15 11:30:12.081014] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:18:29.298 [2024-11-15 11:30:12.081104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.298 [2024-11-15 11:30:12.081134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:29.298 [2024-11-15 11:30:12.081152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.298 [2024-11-15 11:30:12.084137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.298 [2024-11-15 11:30:12.084251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:29.298 BaseBdev2 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.298 BaseBdev3_malloc 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.298 [2024-11-15 11:30:12.150302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:29.298 [2024-11-15 11:30:12.150374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.298 [2024-11-15 
11:30:12.150407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:29.298 [2024-11-15 11:30:12.150427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.298 [2024-11-15 11:30:12.153525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.298 [2024-11-15 11:30:12.153620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:29.298 BaseBdev3 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.298 BaseBdev4_malloc 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:29.298 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.299 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.299 [2024-11-15 11:30:12.211295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:29.299 [2024-11-15 11:30:12.211374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.299 [2024-11-15 11:30:12.211405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:29.299 [2024-11-15 11:30:12.211424] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.299 [2024-11-15 11:30:12.214343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.299 [2024-11-15 11:30:12.214521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:29.299 BaseBdev4 00:18:29.299 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.299 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:29.299 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.299 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.557 spare_malloc 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.557 spare_delay 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.557 [2024-11-15 11:30:12.278953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:29.557 [2024-11-15 11:30:12.279054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:18:29.557 [2024-11-15 11:30:12.279081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:29.557 [2024-11-15 11:30:12.279098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.557 [2024-11-15 11:30:12.281988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.557 [2024-11-15 11:30:12.282249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:29.557 spare 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.557 [2024-11-15 11:30:12.291122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:29.557 [2024-11-15 11:30:12.293723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:29.557 [2024-11-15 11:30:12.293809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:29.557 [2024-11-15 11:30:12.293891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:29.557 [2024-11-15 11:30:12.294172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:29.557 [2024-11-15 11:30:12.294232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:29.557 [2024-11-15 11:30:12.294570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:29.557 [2024-11-15 11:30:12.301334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007780 00:18:29.557 [2024-11-15 11:30:12.301498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:29.557 [2024-11-15 11:30:12.301876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.557 
11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.557 "name": "raid_bdev1", 00:18:29.557 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:29.557 "strip_size_kb": 64, 00:18:29.557 "state": "online", 00:18:29.557 "raid_level": "raid5f", 00:18:29.557 "superblock": true, 00:18:29.557 "num_base_bdevs": 4, 00:18:29.557 "num_base_bdevs_discovered": 4, 00:18:29.557 "num_base_bdevs_operational": 4, 00:18:29.557 "base_bdevs_list": [ 00:18:29.557 { 00:18:29.557 "name": "BaseBdev1", 00:18:29.557 "uuid": "16dc0d57-7bf6-5e84-a409-2cbcec927661", 00:18:29.557 "is_configured": true, 00:18:29.557 "data_offset": 2048, 00:18:29.557 "data_size": 63488 00:18:29.557 }, 00:18:29.557 { 00:18:29.557 "name": "BaseBdev2", 00:18:29.557 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:29.557 "is_configured": true, 00:18:29.557 "data_offset": 2048, 00:18:29.557 "data_size": 63488 00:18:29.557 }, 00:18:29.557 { 00:18:29.557 "name": "BaseBdev3", 00:18:29.557 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:29.557 "is_configured": true, 00:18:29.557 "data_offset": 2048, 00:18:29.557 "data_size": 63488 00:18:29.557 }, 00:18:29.557 { 00:18:29.557 "name": "BaseBdev4", 00:18:29.557 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:29.557 "is_configured": true, 00:18:29.557 "data_offset": 2048, 00:18:29.557 "data_size": 63488 00:18:29.557 } 00:18:29.557 ] 00:18:29.557 }' 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.557 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:30.124 11:30:12 
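The `raid_bdev_size=190464` read back above is consistent with the member geometry in the JSON: each 32 MiB malloc base bdev holds 65536 blocks of 512 B, the superblock reserves a 2048-block `data_offset` leaving `data_size` 63488, and raid5f spends one strip per stripe on parity, so three of the four members carry data. A quick check of that arithmetic (all numbers taken from the log):

```shell
# Reconstruct the reported raid5f capacity from the per-member geometry.
base_blocks=65536                                   # 32 MiB / 512 B blocklen
data_offset=2048                                    # superblock reservation
num_base_bdevs=4
data_size=$((base_blocks - data_offset))            # per-member data blocks
raid_blocks=$((data_size * (num_base_bdevs - 1)))   # one parity strip per stripe
echo "$data_size $raid_blocks"
```

This reproduces both the `"data_size": 63488` fields and the 190464-block total the test stores in `raid_bdev_size`.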
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.124 [2024-11-15 11:30:12.802411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:30.124 11:30:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:30.124 11:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:30.382 [2024-11-15 11:30:13.178273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:30.382 /dev/nbd0 00:18:30.382 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:30.382 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:30.382 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:30.383 1+0 records in 00:18:30.383 1+0 records out 00:18:30.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312256 s, 13.1 MB/s 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:30.383 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:30.949 496+0 records in 00:18:30.949 496+0 records out 00:18:30.949 97517568 bytes (98 MB, 93 MiB) copied, 0.588623 s, 166 MB/s 00:18:30.949 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:30.949 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:30.949 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:30.949 11:30:13 
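The `dd ... bs=196608 count=496` fill above writes in full raid5f stripes: with a 64 KiB strip and 4 base bdevs, each stripe carries 3 data strips, giving the 196608-byte block size, the `write_unit_size=384` (in 512 B blocks) set by the test, and the 97517568-byte total dd reports. A sketch of that arithmetic (values taken from the log):

```shell
# Full-stripe write sizing for the raid5f bdev under test.
strip_size_kb=64
num_base_bdevs=4
blocklen=512
stripe_bytes=$((strip_size_kb * 1024 * (num_base_bdevs - 1)))  # dd bs=
write_unit_blocks=$((stripe_bytes / blocklen))                 # write_unit_size
total_bytes=$((496 * stripe_bytes))                            # dd's byte total
echo "$write_unit_blocks $stripe_bytes $total_bytes"
```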
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:30.949 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:30.949 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:30.949 11:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:31.208 [2024-11-15 11:30:14.117587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.208 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:31.208 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:31.208 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:31.208 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:31.208 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:31.208 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:31.208 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:31.208 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:31.208 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:31.208 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.208 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.466 [2024-11-15 11:30:14.161790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.466 "name": "raid_bdev1", 00:18:31.466 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:31.466 "strip_size_kb": 64, 00:18:31.466 "state": "online", 00:18:31.466 "raid_level": "raid5f", 00:18:31.466 "superblock": true, 00:18:31.466 "num_base_bdevs": 4, 
00:18:31.466 "num_base_bdevs_discovered": 3, 00:18:31.466 "num_base_bdevs_operational": 3, 00:18:31.466 "base_bdevs_list": [ 00:18:31.466 { 00:18:31.466 "name": null, 00:18:31.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.466 "is_configured": false, 00:18:31.466 "data_offset": 0, 00:18:31.466 "data_size": 63488 00:18:31.466 }, 00:18:31.466 { 00:18:31.466 "name": "BaseBdev2", 00:18:31.466 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:31.466 "is_configured": true, 00:18:31.466 "data_offset": 2048, 00:18:31.466 "data_size": 63488 00:18:31.466 }, 00:18:31.466 { 00:18:31.466 "name": "BaseBdev3", 00:18:31.466 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:31.466 "is_configured": true, 00:18:31.466 "data_offset": 2048, 00:18:31.466 "data_size": 63488 00:18:31.466 }, 00:18:31.466 { 00:18:31.466 "name": "BaseBdev4", 00:18:31.466 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:31.466 "is_configured": true, 00:18:31.466 "data_offset": 2048, 00:18:31.466 "data_size": 63488 00:18:31.466 } 00:18:31.466 ] 00:18:31.466 }' 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.466 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.724 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:31.724 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.724 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.982 [2024-11-15 11:30:14.677943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.982 [2024-11-15 11:30:14.692653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:18:31.982 11:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.982 11:30:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:31.982 [2024-11-15 11:30:14.701825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.934 "name": "raid_bdev1", 00:18:32.934 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:32.934 "strip_size_kb": 64, 00:18:32.934 "state": "online", 00:18:32.934 "raid_level": "raid5f", 00:18:32.934 "superblock": true, 00:18:32.934 "num_base_bdevs": 4, 00:18:32.934 "num_base_bdevs_discovered": 4, 00:18:32.934 "num_base_bdevs_operational": 4, 00:18:32.934 "process": { 00:18:32.934 "type": "rebuild", 00:18:32.934 "target": "spare", 00:18:32.934 "progress": { 00:18:32.934 "blocks": 17280, 00:18:32.934 "percent": 9 00:18:32.934 } 
00:18:32.934 }, 00:18:32.934 "base_bdevs_list": [ 00:18:32.934 { 00:18:32.934 "name": "spare", 00:18:32.934 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:32.934 "is_configured": true, 00:18:32.934 "data_offset": 2048, 00:18:32.934 "data_size": 63488 00:18:32.934 }, 00:18:32.934 { 00:18:32.934 "name": "BaseBdev2", 00:18:32.934 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:32.934 "is_configured": true, 00:18:32.934 "data_offset": 2048, 00:18:32.934 "data_size": 63488 00:18:32.934 }, 00:18:32.934 { 00:18:32.934 "name": "BaseBdev3", 00:18:32.934 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:32.934 "is_configured": true, 00:18:32.934 "data_offset": 2048, 00:18:32.934 "data_size": 63488 00:18:32.934 }, 00:18:32.934 { 00:18:32.934 "name": "BaseBdev4", 00:18:32.934 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:32.934 "is_configured": true, 00:18:32.934 "data_offset": 2048, 00:18:32.934 "data_size": 63488 00:18:32.934 } 00:18:32.934 ] 00:18:32.934 }' 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.934 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.934 [2024-11-15 11:30:15.863162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.192 [2024-11-15 11:30:15.914387] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:18:33.192 [2024-11-15 11:30:15.914631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.192 [2024-11-15 11:30:15.914663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.192 [2024-11-15 11:30:15.914682] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.192 11:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.192 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.192 "name": "raid_bdev1", 00:18:33.192 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:33.192 "strip_size_kb": 64, 00:18:33.192 "state": "online", 00:18:33.192 "raid_level": "raid5f", 00:18:33.192 "superblock": true, 00:18:33.192 "num_base_bdevs": 4, 00:18:33.192 "num_base_bdevs_discovered": 3, 00:18:33.192 "num_base_bdevs_operational": 3, 00:18:33.192 "base_bdevs_list": [ 00:18:33.192 { 00:18:33.192 "name": null, 00:18:33.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.192 "is_configured": false, 00:18:33.192 "data_offset": 0, 00:18:33.192 "data_size": 63488 00:18:33.192 }, 00:18:33.192 { 00:18:33.192 "name": "BaseBdev2", 00:18:33.192 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:33.192 "is_configured": true, 00:18:33.192 "data_offset": 2048, 00:18:33.192 "data_size": 63488 00:18:33.192 }, 00:18:33.192 { 00:18:33.192 "name": "BaseBdev3", 00:18:33.192 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:33.192 "is_configured": true, 00:18:33.192 "data_offset": 2048, 00:18:33.192 "data_size": 63488 00:18:33.192 }, 00:18:33.192 { 00:18:33.192 "name": "BaseBdev4", 00:18:33.192 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:33.192 "is_configured": true, 00:18:33.192 "data_offset": 2048, 00:18:33.193 "data_size": 63488 00:18:33.193 } 00:18:33.193 ] 00:18:33.193 }' 00:18:33.193 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.193 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.761 11:30:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.761 "name": "raid_bdev1", 00:18:33.761 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:33.761 "strip_size_kb": 64, 00:18:33.761 "state": "online", 00:18:33.761 "raid_level": "raid5f", 00:18:33.761 "superblock": true, 00:18:33.761 "num_base_bdevs": 4, 00:18:33.761 "num_base_bdevs_discovered": 3, 00:18:33.761 "num_base_bdevs_operational": 3, 00:18:33.761 "base_bdevs_list": [ 00:18:33.761 { 00:18:33.761 "name": null, 00:18:33.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.761 "is_configured": false, 00:18:33.761 "data_offset": 0, 00:18:33.761 "data_size": 63488 00:18:33.761 }, 00:18:33.761 { 00:18:33.761 "name": "BaseBdev2", 00:18:33.761 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:33.761 "is_configured": true, 00:18:33.761 "data_offset": 2048, 00:18:33.761 "data_size": 63488 00:18:33.761 }, 00:18:33.761 { 00:18:33.761 "name": "BaseBdev3", 00:18:33.761 "uuid": 
"2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:33.761 "is_configured": true, 00:18:33.761 "data_offset": 2048, 00:18:33.761 "data_size": 63488 00:18:33.761 }, 00:18:33.761 { 00:18:33.761 "name": "BaseBdev4", 00:18:33.761 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:33.761 "is_configured": true, 00:18:33.761 "data_offset": 2048, 00:18:33.761 "data_size": 63488 00:18:33.761 } 00:18:33.761 ] 00:18:33.761 }' 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.761 [2024-11-15 11:30:16.603687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:33.761 [2024-11-15 11:30:16.617694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.761 11:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:33.761 [2024-11-15 11:30:16.626803] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:34.724 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.724 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.724 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.724 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.724 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.724 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.724 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.724 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.724 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.724 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.724 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.724 "name": "raid_bdev1", 00:18:34.724 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:34.724 "strip_size_kb": 64, 00:18:34.724 "state": "online", 00:18:34.724 "raid_level": "raid5f", 00:18:34.724 "superblock": true, 00:18:34.724 "num_base_bdevs": 4, 00:18:34.724 "num_base_bdevs_discovered": 4, 00:18:34.724 "num_base_bdevs_operational": 4, 00:18:34.724 "process": { 00:18:34.724 "type": "rebuild", 00:18:34.724 "target": "spare", 00:18:34.724 "progress": { 00:18:34.724 "blocks": 17280, 00:18:34.724 "percent": 9 00:18:34.724 } 00:18:34.724 }, 00:18:34.724 "base_bdevs_list": [ 00:18:34.724 { 00:18:34.724 "name": "spare", 00:18:34.724 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:34.724 "is_configured": true, 00:18:34.724 "data_offset": 2048, 00:18:34.724 "data_size": 63488 00:18:34.724 }, 00:18:34.724 { 00:18:34.724 "name": "BaseBdev2", 00:18:34.724 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:34.724 
"is_configured": true, 00:18:34.724 "data_offset": 2048, 00:18:34.724 "data_size": 63488 00:18:34.724 }, 00:18:34.724 { 00:18:34.724 "name": "BaseBdev3", 00:18:34.724 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:34.724 "is_configured": true, 00:18:34.724 "data_offset": 2048, 00:18:34.724 "data_size": 63488 00:18:34.724 }, 00:18:34.724 { 00:18:34.724 "name": "BaseBdev4", 00:18:34.724 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:34.724 "is_configured": true, 00:18:34.724 "data_offset": 2048, 00:18:34.724 "data_size": 63488 00:18:34.724 } 00:18:34.724 ] 00:18:34.724 }' 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:34.982 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=694 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.982 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.982 "name": "raid_bdev1", 00:18:34.982 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:34.982 "strip_size_kb": 64, 00:18:34.982 "state": "online", 00:18:34.982 "raid_level": "raid5f", 00:18:34.982 "superblock": true, 00:18:34.982 "num_base_bdevs": 4, 00:18:34.982 "num_base_bdevs_discovered": 4, 00:18:34.982 "num_base_bdevs_operational": 4, 00:18:34.982 "process": { 00:18:34.982 "type": "rebuild", 00:18:34.982 "target": "spare", 00:18:34.982 "progress": { 00:18:34.982 "blocks": 21120, 00:18:34.982 "percent": 11 00:18:34.982 } 00:18:34.982 }, 00:18:34.982 "base_bdevs_list": [ 00:18:34.982 { 00:18:34.982 "name": "spare", 00:18:34.982 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:34.982 "is_configured": true, 00:18:34.982 "data_offset": 2048, 00:18:34.982 "data_size": 63488 00:18:34.982 }, 00:18:34.982 { 00:18:34.982 "name": "BaseBdev2", 00:18:34.982 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:34.982 
"is_configured": true, 00:18:34.982 "data_offset": 2048, 00:18:34.982 "data_size": 63488 00:18:34.982 }, 00:18:34.982 { 00:18:34.982 "name": "BaseBdev3", 00:18:34.982 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:34.982 "is_configured": true, 00:18:34.982 "data_offset": 2048, 00:18:34.982 "data_size": 63488 00:18:34.982 }, 00:18:34.982 { 00:18:34.983 "name": "BaseBdev4", 00:18:34.983 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:34.983 "is_configured": true, 00:18:34.983 "data_offset": 2048, 00:18:34.983 "data_size": 63488 00:18:34.983 } 00:18:34.983 ] 00:18:34.983 }' 00:18:34.983 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.983 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.983 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.241 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.241 11:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.176 11:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.176 11:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.176 11:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.176 11:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.176 11:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.176 11:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.176 11:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.176 11:30:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.176 11:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.176 11:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.176 11:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.176 11:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.176 "name": "raid_bdev1", 00:18:36.176 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:36.176 "strip_size_kb": 64, 00:18:36.176 "state": "online", 00:18:36.176 "raid_level": "raid5f", 00:18:36.176 "superblock": true, 00:18:36.176 "num_base_bdevs": 4, 00:18:36.176 "num_base_bdevs_discovered": 4, 00:18:36.176 "num_base_bdevs_operational": 4, 00:18:36.176 "process": { 00:18:36.176 "type": "rebuild", 00:18:36.176 "target": "spare", 00:18:36.176 "progress": { 00:18:36.176 "blocks": 42240, 00:18:36.176 "percent": 22 00:18:36.176 } 00:18:36.176 }, 00:18:36.176 "base_bdevs_list": [ 00:18:36.176 { 00:18:36.176 "name": "spare", 00:18:36.176 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:36.176 "is_configured": true, 00:18:36.176 "data_offset": 2048, 00:18:36.176 "data_size": 63488 00:18:36.176 }, 00:18:36.176 { 00:18:36.176 "name": "BaseBdev2", 00:18:36.176 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:36.176 "is_configured": true, 00:18:36.176 "data_offset": 2048, 00:18:36.176 "data_size": 63488 00:18:36.176 }, 00:18:36.176 { 00:18:36.176 "name": "BaseBdev3", 00:18:36.176 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:36.176 "is_configured": true, 00:18:36.176 "data_offset": 2048, 00:18:36.176 "data_size": 63488 00:18:36.176 }, 00:18:36.176 { 00:18:36.176 "name": "BaseBdev4", 00:18:36.176 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:36.176 "is_configured": true, 00:18:36.176 "data_offset": 2048, 00:18:36.176 
"data_size": 63488 00:18:36.176 } 00:18:36.176 ] 00:18:36.176 }' 00:18:36.176 11:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.176 11:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.176 11:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.176 11:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.176 11:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.550 "name": 
"raid_bdev1", 00:18:37.550 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:37.550 "strip_size_kb": 64, 00:18:37.550 "state": "online", 00:18:37.550 "raid_level": "raid5f", 00:18:37.550 "superblock": true, 00:18:37.550 "num_base_bdevs": 4, 00:18:37.550 "num_base_bdevs_discovered": 4, 00:18:37.550 "num_base_bdevs_operational": 4, 00:18:37.550 "process": { 00:18:37.550 "type": "rebuild", 00:18:37.550 "target": "spare", 00:18:37.550 "progress": { 00:18:37.550 "blocks": 65280, 00:18:37.550 "percent": 34 00:18:37.550 } 00:18:37.550 }, 00:18:37.550 "base_bdevs_list": [ 00:18:37.550 { 00:18:37.550 "name": "spare", 00:18:37.550 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:37.550 "is_configured": true, 00:18:37.550 "data_offset": 2048, 00:18:37.550 "data_size": 63488 00:18:37.550 }, 00:18:37.550 { 00:18:37.550 "name": "BaseBdev2", 00:18:37.550 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:37.550 "is_configured": true, 00:18:37.550 "data_offset": 2048, 00:18:37.550 "data_size": 63488 00:18:37.550 }, 00:18:37.550 { 00:18:37.550 "name": "BaseBdev3", 00:18:37.550 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:37.550 "is_configured": true, 00:18:37.550 "data_offset": 2048, 00:18:37.550 "data_size": 63488 00:18:37.550 }, 00:18:37.550 { 00:18:37.550 "name": "BaseBdev4", 00:18:37.550 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:37.550 "is_configured": true, 00:18:37.550 "data_offset": 2048, 00:18:37.550 "data_size": 63488 00:18:37.550 } 00:18:37.550 ] 00:18:37.550 }' 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.550 11:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.550 11:30:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.485 "name": "raid_bdev1", 00:18:38.485 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:38.485 "strip_size_kb": 64, 00:18:38.485 "state": "online", 00:18:38.485 "raid_level": "raid5f", 00:18:38.485 "superblock": true, 00:18:38.485 "num_base_bdevs": 4, 00:18:38.485 "num_base_bdevs_discovered": 4, 00:18:38.485 "num_base_bdevs_operational": 4, 00:18:38.485 "process": { 00:18:38.485 "type": "rebuild", 00:18:38.485 "target": "spare", 00:18:38.485 "progress": { 00:18:38.485 "blocks": 86400, 00:18:38.485 "percent": 45 00:18:38.485 } 00:18:38.485 }, 00:18:38.485 
"base_bdevs_list": [ 00:18:38.485 { 00:18:38.485 "name": "spare", 00:18:38.485 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:38.485 "is_configured": true, 00:18:38.485 "data_offset": 2048, 00:18:38.485 "data_size": 63488 00:18:38.485 }, 00:18:38.485 { 00:18:38.485 "name": "BaseBdev2", 00:18:38.485 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:38.485 "is_configured": true, 00:18:38.485 "data_offset": 2048, 00:18:38.485 "data_size": 63488 00:18:38.485 }, 00:18:38.485 { 00:18:38.485 "name": "BaseBdev3", 00:18:38.485 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:38.485 "is_configured": true, 00:18:38.485 "data_offset": 2048, 00:18:38.485 "data_size": 63488 00:18:38.485 }, 00:18:38.485 { 00:18:38.485 "name": "BaseBdev4", 00:18:38.485 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:38.485 "is_configured": true, 00:18:38.485 "data_offset": 2048, 00:18:38.485 "data_size": 63488 00:18:38.485 } 00:18:38.485 ] 00:18:38.485 }' 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.485 11:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.860 "name": "raid_bdev1", 00:18:39.860 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:39.860 "strip_size_kb": 64, 00:18:39.860 "state": "online", 00:18:39.860 "raid_level": "raid5f", 00:18:39.860 "superblock": true, 00:18:39.860 "num_base_bdevs": 4, 00:18:39.860 "num_base_bdevs_discovered": 4, 00:18:39.860 "num_base_bdevs_operational": 4, 00:18:39.860 "process": { 00:18:39.860 "type": "rebuild", 00:18:39.860 "target": "spare", 00:18:39.860 "progress": { 00:18:39.860 "blocks": 109440, 00:18:39.860 "percent": 57 00:18:39.860 } 00:18:39.860 }, 00:18:39.860 "base_bdevs_list": [ 00:18:39.860 { 00:18:39.860 "name": "spare", 00:18:39.860 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:39.860 "is_configured": true, 00:18:39.860 "data_offset": 2048, 00:18:39.860 "data_size": 63488 00:18:39.860 }, 00:18:39.860 { 00:18:39.860 "name": "BaseBdev2", 00:18:39.860 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:39.860 "is_configured": true, 00:18:39.860 "data_offset": 2048, 00:18:39.860 "data_size": 63488 00:18:39.860 }, 00:18:39.860 { 00:18:39.860 "name": "BaseBdev3", 00:18:39.860 "uuid": 
"2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:39.860 "is_configured": true, 00:18:39.860 "data_offset": 2048, 00:18:39.860 "data_size": 63488 00:18:39.860 }, 00:18:39.860 { 00:18:39.860 "name": "BaseBdev4", 00:18:39.860 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:39.860 "is_configured": true, 00:18:39.860 "data_offset": 2048, 00:18:39.860 "data_size": 63488 00:18:39.860 } 00:18:39.860 ] 00:18:39.860 }' 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.860 11:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.808 "name": "raid_bdev1", 00:18:40.808 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:40.808 "strip_size_kb": 64, 00:18:40.808 "state": "online", 00:18:40.808 "raid_level": "raid5f", 00:18:40.808 "superblock": true, 00:18:40.808 "num_base_bdevs": 4, 00:18:40.808 "num_base_bdevs_discovered": 4, 00:18:40.808 "num_base_bdevs_operational": 4, 00:18:40.808 "process": { 00:18:40.808 "type": "rebuild", 00:18:40.808 "target": "spare", 00:18:40.808 "progress": { 00:18:40.808 "blocks": 130560, 00:18:40.808 "percent": 68 00:18:40.808 } 00:18:40.808 }, 00:18:40.808 "base_bdevs_list": [ 00:18:40.808 { 00:18:40.808 "name": "spare", 00:18:40.808 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:40.808 "is_configured": true, 00:18:40.808 "data_offset": 2048, 00:18:40.808 "data_size": 63488 00:18:40.808 }, 00:18:40.808 { 00:18:40.808 "name": "BaseBdev2", 00:18:40.808 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:40.808 "is_configured": true, 00:18:40.808 "data_offset": 2048, 00:18:40.808 "data_size": 63488 00:18:40.808 }, 00:18:40.808 { 00:18:40.808 "name": "BaseBdev3", 00:18:40.808 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:40.808 "is_configured": true, 00:18:40.808 "data_offset": 2048, 00:18:40.808 "data_size": 63488 00:18:40.808 }, 00:18:40.808 { 00:18:40.808 "name": "BaseBdev4", 00:18:40.808 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:40.808 "is_configured": true, 00:18:40.808 "data_offset": 2048, 00:18:40.808 "data_size": 63488 00:18:40.808 } 00:18:40.808 ] 00:18:40.808 }' 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.808 11:30:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.808 11:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.192 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.192 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.192 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.192 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.192 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.192 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.192 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.192 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.192 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.192 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.192 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.192 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.192 "name": "raid_bdev1", 00:18:42.192 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:42.192 "strip_size_kb": 64, 00:18:42.192 "state": "online", 00:18:42.192 "raid_level": "raid5f", 00:18:42.192 "superblock": true, 
00:18:42.192 "num_base_bdevs": 4, 00:18:42.192 "num_base_bdevs_discovered": 4, 00:18:42.192 "num_base_bdevs_operational": 4, 00:18:42.192 "process": { 00:18:42.192 "type": "rebuild", 00:18:42.192 "target": "spare", 00:18:42.192 "progress": { 00:18:42.192 "blocks": 153600, 00:18:42.192 "percent": 80 00:18:42.192 } 00:18:42.192 }, 00:18:42.192 "base_bdevs_list": [ 00:18:42.192 { 00:18:42.192 "name": "spare", 00:18:42.192 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:42.192 "is_configured": true, 00:18:42.193 "data_offset": 2048, 00:18:42.193 "data_size": 63488 00:18:42.193 }, 00:18:42.193 { 00:18:42.193 "name": "BaseBdev2", 00:18:42.193 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:42.193 "is_configured": true, 00:18:42.193 "data_offset": 2048, 00:18:42.193 "data_size": 63488 00:18:42.193 }, 00:18:42.193 { 00:18:42.193 "name": "BaseBdev3", 00:18:42.193 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:42.193 "is_configured": true, 00:18:42.193 "data_offset": 2048, 00:18:42.193 "data_size": 63488 00:18:42.193 }, 00:18:42.193 { 00:18:42.193 "name": "BaseBdev4", 00:18:42.193 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:42.193 "is_configured": true, 00:18:42.193 "data_offset": 2048, 00:18:42.193 "data_size": 63488 00:18:42.193 } 00:18:42.193 ] 00:18:42.193 }' 00:18:42.193 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.193 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.193 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.193 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.193 11:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.129 11:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.129 11:30:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.129 11:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.129 11:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.129 11:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.129 11:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.129 11:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.129 11:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.129 11:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.129 11:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.129 11:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.129 11:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.129 "name": "raid_bdev1", 00:18:43.129 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:43.129 "strip_size_kb": 64, 00:18:43.129 "state": "online", 00:18:43.129 "raid_level": "raid5f", 00:18:43.129 "superblock": true, 00:18:43.129 "num_base_bdevs": 4, 00:18:43.129 "num_base_bdevs_discovered": 4, 00:18:43.129 "num_base_bdevs_operational": 4, 00:18:43.129 "process": { 00:18:43.129 "type": "rebuild", 00:18:43.129 "target": "spare", 00:18:43.129 "progress": { 00:18:43.129 "blocks": 176640, 00:18:43.129 "percent": 92 00:18:43.129 } 00:18:43.129 }, 00:18:43.129 "base_bdevs_list": [ 00:18:43.129 { 00:18:43.129 "name": "spare", 00:18:43.129 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:43.129 "is_configured": true, 00:18:43.129 "data_offset": 2048, 00:18:43.129 
"data_size": 63488 00:18:43.129 }, 00:18:43.129 { 00:18:43.129 "name": "BaseBdev2", 00:18:43.129 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:43.129 "is_configured": true, 00:18:43.129 "data_offset": 2048, 00:18:43.129 "data_size": 63488 00:18:43.129 }, 00:18:43.129 { 00:18:43.129 "name": "BaseBdev3", 00:18:43.129 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:43.129 "is_configured": true, 00:18:43.129 "data_offset": 2048, 00:18:43.129 "data_size": 63488 00:18:43.129 }, 00:18:43.129 { 00:18:43.129 "name": "BaseBdev4", 00:18:43.129 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:43.129 "is_configured": true, 00:18:43.129 "data_offset": 2048, 00:18:43.129 "data_size": 63488 00:18:43.129 } 00:18:43.129 ] 00:18:43.129 }' 00:18:43.129 11:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.129 11:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.129 11:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.388 11:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.388 11:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.956 [2024-11-15 11:30:26.723147] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:43.956 [2024-11-15 11:30:26.723550] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:43.956 [2024-11-15 11:30:26.723783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.215 "name": "raid_bdev1", 00:18:44.215 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:44.215 "strip_size_kb": 64, 00:18:44.215 "state": "online", 00:18:44.215 "raid_level": "raid5f", 00:18:44.215 "superblock": true, 00:18:44.215 "num_base_bdevs": 4, 00:18:44.215 "num_base_bdevs_discovered": 4, 00:18:44.215 "num_base_bdevs_operational": 4, 00:18:44.215 "base_bdevs_list": [ 00:18:44.215 { 00:18:44.215 "name": "spare", 00:18:44.215 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:44.215 "is_configured": true, 00:18:44.215 "data_offset": 2048, 00:18:44.215 "data_size": 63488 00:18:44.215 }, 00:18:44.215 { 00:18:44.215 "name": "BaseBdev2", 00:18:44.215 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:44.215 "is_configured": true, 00:18:44.215 "data_offset": 2048, 00:18:44.215 "data_size": 63488 00:18:44.215 }, 00:18:44.215 { 00:18:44.215 "name": "BaseBdev3", 00:18:44.215 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 
00:18:44.215 "is_configured": true, 00:18:44.215 "data_offset": 2048, 00:18:44.215 "data_size": 63488 00:18:44.215 }, 00:18:44.215 { 00:18:44.215 "name": "BaseBdev4", 00:18:44.215 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:44.215 "is_configured": true, 00:18:44.215 "data_offset": 2048, 00:18:44.215 "data_size": 63488 00:18:44.215 } 00:18:44.215 ] 00:18:44.215 }' 00:18:44.215 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.475 11:30:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.475 "name": "raid_bdev1", 00:18:44.475 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:44.475 "strip_size_kb": 64, 00:18:44.475 "state": "online", 00:18:44.475 "raid_level": "raid5f", 00:18:44.475 "superblock": true, 00:18:44.475 "num_base_bdevs": 4, 00:18:44.475 "num_base_bdevs_discovered": 4, 00:18:44.475 "num_base_bdevs_operational": 4, 00:18:44.475 "base_bdevs_list": [ 00:18:44.475 { 00:18:44.475 "name": "spare", 00:18:44.475 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:44.475 "is_configured": true, 00:18:44.475 "data_offset": 2048, 00:18:44.475 "data_size": 63488 00:18:44.475 }, 00:18:44.475 { 00:18:44.475 "name": "BaseBdev2", 00:18:44.475 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:44.475 "is_configured": true, 00:18:44.475 "data_offset": 2048, 00:18:44.475 "data_size": 63488 00:18:44.475 }, 00:18:44.475 { 00:18:44.475 "name": "BaseBdev3", 00:18:44.475 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:44.475 "is_configured": true, 00:18:44.475 "data_offset": 2048, 00:18:44.475 "data_size": 63488 00:18:44.475 }, 00:18:44.475 { 00:18:44.475 "name": "BaseBdev4", 00:18:44.475 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:44.475 "is_configured": true, 00:18:44.475 "data_offset": 2048, 00:18:44.475 "data_size": 63488 00:18:44.475 } 00:18:44.475 ] 00:18:44.475 }' 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.475 11:30:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.475 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.734 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.734 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.734 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.734 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.734 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.734 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.734 "name": "raid_bdev1", 00:18:44.734 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:44.734 "strip_size_kb": 64, 00:18:44.734 "state": "online", 00:18:44.734 "raid_level": "raid5f", 00:18:44.734 "superblock": true, 
00:18:44.734 "num_base_bdevs": 4, 00:18:44.734 "num_base_bdevs_discovered": 4, 00:18:44.734 "num_base_bdevs_operational": 4, 00:18:44.734 "base_bdevs_list": [ 00:18:44.734 { 00:18:44.734 "name": "spare", 00:18:44.734 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:44.734 "is_configured": true, 00:18:44.734 "data_offset": 2048, 00:18:44.734 "data_size": 63488 00:18:44.734 }, 00:18:44.734 { 00:18:44.734 "name": "BaseBdev2", 00:18:44.734 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:44.734 "is_configured": true, 00:18:44.734 "data_offset": 2048, 00:18:44.734 "data_size": 63488 00:18:44.734 }, 00:18:44.734 { 00:18:44.734 "name": "BaseBdev3", 00:18:44.734 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:44.734 "is_configured": true, 00:18:44.734 "data_offset": 2048, 00:18:44.734 "data_size": 63488 00:18:44.735 }, 00:18:44.735 { 00:18:44.735 "name": "BaseBdev4", 00:18:44.735 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:44.735 "is_configured": true, 00:18:44.735 "data_offset": 2048, 00:18:44.735 "data_size": 63488 00:18:44.735 } 00:18:44.735 ] 00:18:44.735 }' 00:18:44.735 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.735 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.302 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:45.302 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.302 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.302 [2024-11-15 11:30:27.955290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.302 [2024-11-15 11:30:27.955332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.302 [2024-11-15 11:30:27.955441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.302 
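The polling loop traced above repeatedly pipes `bdev_raid_get_bdevs` output through two jq filters (`bdev_raid.sh@176`/`@177`) to decide whether the rebuild is still running. A standalone sketch of just that extraction, using a trimmed copy of the `raid_bdev_info` JSON captured in the log as sample data rather than a live RPC call:

```shell
# Sample data copied (trimmed) from the raid_bdev_info captured in the log;
# this is an illustration of the jq filters only, not a live rpc_cmd call.
raid_bdev_info='{"name":"raid_bdev1","state":"online","process":{"type":"rebuild","target":"spare","progress":{"blocks":109440,"percent":57}}}'

# jq's '// "none"' alternative operator supplies the fallback the test
# relies on once the rebuild finishes and the "process" object disappears
# from the RPC output.
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")
finished=$(jq -r '.process.type // "none"' <<< '{"name":"raid_bdev1"}')

echo "$process_type $process_target $finished"
```

With the sample input this prints `rebuild spare none`, matching the `[[ rebuild == \r\e\b\u\i\l\d ]]` / `[[ none == \r\e\b\u\i\l\d ]]` comparisons seen in the trace before and after the rebuild completes.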
[2024-11-15 11:30:27.955644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.302 [2024-11-15 11:30:27.955674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:45.302 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.302 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.302 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:45.302 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.302 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.302 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.302 11:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:45.302 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:45.302 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:45.302 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:45.302 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.302 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:45.302 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:45.302 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:45.302 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:45.302 11:30:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:45.302 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:45.302 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.302 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:45.302 /dev/nbd0 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.561 1+0 records in 00:18:45.561 1+0 records out 00:18:45.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644617 s, 6.4 MB/s 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.561 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:45.820 /dev/nbd1 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:45.820 11:30:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.820 1+0 records in 00:18:45.820 1+0 records out 00:18:45.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270804 s, 15.1 MB/s 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.820 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:46.078 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:46.078 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.078 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:46.078 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.078 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:46.078 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.078 11:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:46.338 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:46.338 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.338 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:46.338 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.338 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.338 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.338 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:46.338 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.338 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.338 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # 
return 0 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.597 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.597 [2024-11-15 11:30:29.405063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:46.597 [2024-11-15 11:30:29.405140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.597 [2024-11-15 11:30:29.405172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:46.597 [2024-11-15 11:30:29.405251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.597 [2024-11-15 11:30:29.408412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.597 [2024-11-15 11:30:29.408456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:46.597 [2024-11-15 11:30:29.408598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:46.597 [2024-11-15 11:30:29.408662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.597 [2024-11-15 11:30:29.408833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:18:46.597 [2024-11-15 11:30:29.408993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:46.597 [2024-11-15 11:30:29.409099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:46.598 spare 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.598 [2024-11-15 11:30:29.509329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:46.598 [2024-11-15 11:30:29.509360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:46.598 [2024-11-15 11:30:29.509652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:46.598 [2024-11-15 11:30:29.515614] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:46.598 [2024-11-15 11:30:29.515639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:46.598 [2024-11-15 11:30:29.515859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.598 11:30:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.598 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.857 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.857 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.857 "name": "raid_bdev1", 00:18:46.857 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:46.857 "strip_size_kb": 64, 00:18:46.857 "state": "online", 00:18:46.857 "raid_level": "raid5f", 00:18:46.857 "superblock": true, 00:18:46.857 "num_base_bdevs": 4, 00:18:46.857 "num_base_bdevs_discovered": 4, 00:18:46.857 "num_base_bdevs_operational": 4, 00:18:46.857 "base_bdevs_list": [ 00:18:46.857 { 00:18:46.857 "name": "spare", 00:18:46.857 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:46.857 "is_configured": true, 00:18:46.857 "data_offset": 2048, 00:18:46.857 "data_size": 63488 
00:18:46.857 }, 00:18:46.857 { 00:18:46.857 "name": "BaseBdev2", 00:18:46.857 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:46.857 "is_configured": true, 00:18:46.857 "data_offset": 2048, 00:18:46.857 "data_size": 63488 00:18:46.857 }, 00:18:46.857 { 00:18:46.857 "name": "BaseBdev3", 00:18:46.857 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:46.857 "is_configured": true, 00:18:46.857 "data_offset": 2048, 00:18:46.857 "data_size": 63488 00:18:46.857 }, 00:18:46.857 { 00:18:46.857 "name": "BaseBdev4", 00:18:46.857 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:46.857 "is_configured": true, 00:18:46.857 "data_offset": 2048, 00:18:46.857 "data_size": 63488 00:18:46.857 } 00:18:46.857 ] 00:18:46.857 }' 00:18:46.857 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.857 11:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.116 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.116 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.116 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.116 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.116 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.116 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.116 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.116 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.116 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.376 11:30:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.376 "name": "raid_bdev1", 00:18:47.376 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:47.376 "strip_size_kb": 64, 00:18:47.376 "state": "online", 00:18:47.376 "raid_level": "raid5f", 00:18:47.376 "superblock": true, 00:18:47.376 "num_base_bdevs": 4, 00:18:47.376 "num_base_bdevs_discovered": 4, 00:18:47.376 "num_base_bdevs_operational": 4, 00:18:47.376 "base_bdevs_list": [ 00:18:47.376 { 00:18:47.376 "name": "spare", 00:18:47.376 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:47.376 "is_configured": true, 00:18:47.376 "data_offset": 2048, 00:18:47.376 "data_size": 63488 00:18:47.376 }, 00:18:47.376 { 00:18:47.376 "name": "BaseBdev2", 00:18:47.376 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:47.376 "is_configured": true, 00:18:47.376 "data_offset": 2048, 00:18:47.376 "data_size": 63488 00:18:47.376 }, 00:18:47.376 { 00:18:47.376 "name": "BaseBdev3", 00:18:47.376 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:47.376 "is_configured": true, 00:18:47.376 "data_offset": 2048, 00:18:47.376 "data_size": 63488 00:18:47.376 }, 00:18:47.376 { 00:18:47.376 "name": "BaseBdev4", 00:18:47.376 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:47.376 "is_configured": true, 00:18:47.376 "data_offset": 2048, 00:18:47.376 "data_size": 63488 00:18:47.376 } 00:18:47.376 ] 00:18:47.376 }' 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.376 11:30:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.376 [2024-11-15 11:30:30.268405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.376 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.635 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.635 "name": "raid_bdev1", 00:18:47.635 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:47.635 "strip_size_kb": 64, 00:18:47.635 "state": "online", 00:18:47.635 "raid_level": "raid5f", 00:18:47.635 "superblock": true, 00:18:47.635 "num_base_bdevs": 4, 00:18:47.635 "num_base_bdevs_discovered": 3, 00:18:47.635 "num_base_bdevs_operational": 3, 00:18:47.635 "base_bdevs_list": [ 00:18:47.635 { 00:18:47.635 "name": null, 00:18:47.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.635 "is_configured": false, 00:18:47.635 "data_offset": 0, 00:18:47.635 "data_size": 63488 00:18:47.635 }, 00:18:47.635 { 00:18:47.635 "name": "BaseBdev2", 00:18:47.635 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:47.635 "is_configured": true, 00:18:47.635 "data_offset": 2048, 00:18:47.635 "data_size": 63488 00:18:47.635 }, 00:18:47.635 { 00:18:47.635 "name": "BaseBdev3", 00:18:47.635 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:47.635 "is_configured": true, 00:18:47.635 "data_offset": 2048, 
00:18:47.635 "data_size": 63488 00:18:47.635 }, 00:18:47.635 { 00:18:47.635 "name": "BaseBdev4", 00:18:47.635 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:47.635 "is_configured": true, 00:18:47.635 "data_offset": 2048, 00:18:47.635 "data_size": 63488 00:18:47.635 } 00:18:47.635 ] 00:18:47.635 }' 00:18:47.635 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.635 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.970 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:47.970 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.970 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.970 [2024-11-15 11:30:30.820674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.970 [2024-11-15 11:30:30.820909] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:47.970 [2024-11-15 11:30:30.820954] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:47.970 [2024-11-15 11:30:30.821023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.970 [2024-11-15 11:30:30.835407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:47.970 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.970 11:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:47.970 [2024-11-15 11:30:30.844837] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:48.907 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.907 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.907 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.907 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.907 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.907 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.907 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.907 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.907 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.166 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.166 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.166 "name": "raid_bdev1", 00:18:49.166 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:49.166 "strip_size_kb": 64, 00:18:49.166 "state": "online", 00:18:49.166 
"raid_level": "raid5f", 00:18:49.166 "superblock": true, 00:18:49.166 "num_base_bdevs": 4, 00:18:49.166 "num_base_bdevs_discovered": 4, 00:18:49.166 "num_base_bdevs_operational": 4, 00:18:49.166 "process": { 00:18:49.166 "type": "rebuild", 00:18:49.166 "target": "spare", 00:18:49.166 "progress": { 00:18:49.166 "blocks": 17280, 00:18:49.166 "percent": 9 00:18:49.166 } 00:18:49.166 }, 00:18:49.166 "base_bdevs_list": [ 00:18:49.166 { 00:18:49.166 "name": "spare", 00:18:49.166 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:49.166 "is_configured": true, 00:18:49.166 "data_offset": 2048, 00:18:49.166 "data_size": 63488 00:18:49.166 }, 00:18:49.166 { 00:18:49.166 "name": "BaseBdev2", 00:18:49.166 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:49.167 "is_configured": true, 00:18:49.167 "data_offset": 2048, 00:18:49.167 "data_size": 63488 00:18:49.167 }, 00:18:49.167 { 00:18:49.167 "name": "BaseBdev3", 00:18:49.167 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:49.167 "is_configured": true, 00:18:49.167 "data_offset": 2048, 00:18:49.167 "data_size": 63488 00:18:49.167 }, 00:18:49.167 { 00:18:49.167 "name": "BaseBdev4", 00:18:49.167 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:49.167 "is_configured": true, 00:18:49.167 "data_offset": 2048, 00:18:49.167 "data_size": 63488 00:18:49.167 } 00:18:49.167 ] 00:18:49.167 }' 00:18:49.167 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.167 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.167 11:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.167 [2024-11-15 11:30:32.010616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.167 [2024-11-15 11:30:32.056624] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:49.167 [2024-11-15 11:30:32.056743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.167 [2024-11-15 11:30:32.056805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.167 [2024-11-15 11:30:32.056824] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.167 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.426 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.426 "name": "raid_bdev1", 00:18:49.426 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:49.426 "strip_size_kb": 64, 00:18:49.426 "state": "online", 00:18:49.426 "raid_level": "raid5f", 00:18:49.426 "superblock": true, 00:18:49.426 "num_base_bdevs": 4, 00:18:49.426 "num_base_bdevs_discovered": 3, 00:18:49.426 "num_base_bdevs_operational": 3, 00:18:49.426 "base_bdevs_list": [ 00:18:49.426 { 00:18:49.426 "name": null, 00:18:49.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.426 "is_configured": false, 00:18:49.426 "data_offset": 0, 00:18:49.426 "data_size": 63488 00:18:49.426 }, 00:18:49.426 { 00:18:49.426 "name": "BaseBdev2", 00:18:49.426 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:49.426 "is_configured": true, 00:18:49.427 "data_offset": 2048, 00:18:49.427 "data_size": 63488 00:18:49.427 }, 00:18:49.427 { 00:18:49.427 "name": "BaseBdev3", 00:18:49.427 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:49.427 "is_configured": true, 00:18:49.427 "data_offset": 2048, 00:18:49.427 "data_size": 63488 00:18:49.427 }, 00:18:49.427 { 00:18:49.427 "name": "BaseBdev4", 00:18:49.427 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:49.427 "is_configured": true, 00:18:49.427 "data_offset": 2048, 00:18:49.427 "data_size": 63488 00:18:49.427 } 00:18:49.427 ] 00:18:49.427 }' 
00:18:49.427 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.427 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.686 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:49.686 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.686 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.686 [2024-11-15 11:30:32.632122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:49.686 [2024-11-15 11:30:32.632227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.686 [2024-11-15 11:30:32.632268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:49.686 [2024-11-15 11:30:32.632288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.686 [2024-11-15 11:30:32.632954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.686 [2024-11-15 11:30:32.632993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:49.686 [2024-11-15 11:30:32.633151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:49.686 [2024-11-15 11:30:32.633192] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:49.686 [2024-11-15 11:30:32.633223] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:49.686 [2024-11-15 11:30:32.633276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.944 [2024-11-15 11:30:32.648643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:49.944 spare 00:18:49.944 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.944 11:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:49.944 [2024-11-15 11:30:32.657794] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.882 "name": "raid_bdev1", 00:18:50.882 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:50.882 "strip_size_kb": 64, 00:18:50.882 "state": 
"online", 00:18:50.882 "raid_level": "raid5f", 00:18:50.882 "superblock": true, 00:18:50.882 "num_base_bdevs": 4, 00:18:50.882 "num_base_bdevs_discovered": 4, 00:18:50.882 "num_base_bdevs_operational": 4, 00:18:50.882 "process": { 00:18:50.882 "type": "rebuild", 00:18:50.882 "target": "spare", 00:18:50.882 "progress": { 00:18:50.882 "blocks": 17280, 00:18:50.882 "percent": 9 00:18:50.882 } 00:18:50.882 }, 00:18:50.882 "base_bdevs_list": [ 00:18:50.882 { 00:18:50.882 "name": "spare", 00:18:50.882 "uuid": "6a6a9cd7-50db-55ae-9309-6c3d41e28988", 00:18:50.882 "is_configured": true, 00:18:50.882 "data_offset": 2048, 00:18:50.882 "data_size": 63488 00:18:50.882 }, 00:18:50.882 { 00:18:50.882 "name": "BaseBdev2", 00:18:50.882 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:50.882 "is_configured": true, 00:18:50.882 "data_offset": 2048, 00:18:50.882 "data_size": 63488 00:18:50.882 }, 00:18:50.882 { 00:18:50.882 "name": "BaseBdev3", 00:18:50.882 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:50.882 "is_configured": true, 00:18:50.882 "data_offset": 2048, 00:18:50.882 "data_size": 63488 00:18:50.882 }, 00:18:50.882 { 00:18:50.882 "name": "BaseBdev4", 00:18:50.882 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:50.882 "is_configured": true, 00:18:50.882 "data_offset": 2048, 00:18:50.882 "data_size": 63488 00:18:50.882 } 00:18:50.882 ] 00:18:50.882 }' 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:50.882 11:30:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.882 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.882 [2024-11-15 11:30:33.827417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.141 [2024-11-15 11:30:33.871391] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:51.141 [2024-11-15 11:30:33.871525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.141 [2024-11-15 11:30:33.871573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.141 [2024-11-15 11:30:33.871587] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:51.141 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.141 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:51.141 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.141 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.141 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:51.141 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.141 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:51.141 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.141 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.141 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.141 11:30:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.142 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.142 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.142 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.142 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.142 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.142 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.142 "name": "raid_bdev1", 00:18:51.142 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:51.142 "strip_size_kb": 64, 00:18:51.142 "state": "online", 00:18:51.142 "raid_level": "raid5f", 00:18:51.142 "superblock": true, 00:18:51.142 "num_base_bdevs": 4, 00:18:51.142 "num_base_bdevs_discovered": 3, 00:18:51.142 "num_base_bdevs_operational": 3, 00:18:51.142 "base_bdevs_list": [ 00:18:51.142 { 00:18:51.142 "name": null, 00:18:51.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.142 "is_configured": false, 00:18:51.142 "data_offset": 0, 00:18:51.142 "data_size": 63488 00:18:51.142 }, 00:18:51.142 { 00:18:51.142 "name": "BaseBdev2", 00:18:51.142 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:51.142 "is_configured": true, 00:18:51.142 "data_offset": 2048, 00:18:51.142 "data_size": 63488 00:18:51.142 }, 00:18:51.142 { 00:18:51.142 "name": "BaseBdev3", 00:18:51.142 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:51.142 "is_configured": true, 00:18:51.142 "data_offset": 2048, 00:18:51.142 "data_size": 63488 00:18:51.142 }, 00:18:51.142 { 00:18:51.142 "name": "BaseBdev4", 00:18:51.142 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:51.142 "is_configured": true, 00:18:51.142 "data_offset": 2048, 00:18:51.142 
"data_size": 63488 00:18:51.142 } 00:18:51.142 ] 00:18:51.142 }' 00:18:51.142 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.142 11:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.709 "name": "raid_bdev1", 00:18:51.709 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:51.709 "strip_size_kb": 64, 00:18:51.709 "state": "online", 00:18:51.709 "raid_level": "raid5f", 00:18:51.709 "superblock": true, 00:18:51.709 "num_base_bdevs": 4, 00:18:51.709 "num_base_bdevs_discovered": 3, 00:18:51.709 "num_base_bdevs_operational": 3, 00:18:51.709 "base_bdevs_list": [ 00:18:51.709 { 00:18:51.709 "name": null, 00:18:51.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.709 
"is_configured": false, 00:18:51.709 "data_offset": 0, 00:18:51.709 "data_size": 63488 00:18:51.709 }, 00:18:51.709 { 00:18:51.709 "name": "BaseBdev2", 00:18:51.709 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:51.709 "is_configured": true, 00:18:51.709 "data_offset": 2048, 00:18:51.709 "data_size": 63488 00:18:51.709 }, 00:18:51.709 { 00:18:51.709 "name": "BaseBdev3", 00:18:51.709 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:51.709 "is_configured": true, 00:18:51.709 "data_offset": 2048, 00:18:51.709 "data_size": 63488 00:18:51.709 }, 00:18:51.709 { 00:18:51.709 "name": "BaseBdev4", 00:18:51.709 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:51.709 "is_configured": true, 00:18:51.709 "data_offset": 2048, 00:18:51.709 "data_size": 63488 00:18:51.709 } 00:18:51.709 ] 00:18:51.709 }' 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.709 11:30:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.709 [2024-11-15 11:30:34.603774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:51.709 [2024-11-15 11:30:34.603863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.709 [2024-11-15 11:30:34.603898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:51.709 [2024-11-15 11:30:34.603912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.709 [2024-11-15 11:30:34.604584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.709 [2024-11-15 11:30:34.604614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:51.709 [2024-11-15 11:30:34.604751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:51.709 [2024-11-15 11:30:34.604773] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:51.709 [2024-11-15 11:30:34.604788] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:51.709 [2024-11-15 11:30:34.604801] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:51.709 BaseBdev1 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.709 11:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.086 "name": "raid_bdev1", 00:18:53.086 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:53.086 "strip_size_kb": 64, 00:18:53.086 "state": "online", 00:18:53.086 "raid_level": "raid5f", 00:18:53.086 "superblock": true, 00:18:53.086 "num_base_bdevs": 4, 00:18:53.086 "num_base_bdevs_discovered": 3, 00:18:53.086 "num_base_bdevs_operational": 3, 00:18:53.086 "base_bdevs_list": [ 00:18:53.086 { 00:18:53.086 "name": null, 00:18:53.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.086 "is_configured": false, 00:18:53.086 
"data_offset": 0, 00:18:53.086 "data_size": 63488 00:18:53.086 }, 00:18:53.086 { 00:18:53.086 "name": "BaseBdev2", 00:18:53.086 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:53.086 "is_configured": true, 00:18:53.086 "data_offset": 2048, 00:18:53.086 "data_size": 63488 00:18:53.086 }, 00:18:53.086 { 00:18:53.086 "name": "BaseBdev3", 00:18:53.086 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:53.086 "is_configured": true, 00:18:53.086 "data_offset": 2048, 00:18:53.086 "data_size": 63488 00:18:53.086 }, 00:18:53.086 { 00:18:53.086 "name": "BaseBdev4", 00:18:53.086 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:53.086 "is_configured": true, 00:18:53.086 "data_offset": 2048, 00:18:53.086 "data_size": 63488 00:18:53.086 } 00:18:53.086 ] 00:18:53.086 }' 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.086 11:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.345 "name": "raid_bdev1", 00:18:53.345 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:53.345 "strip_size_kb": 64, 00:18:53.345 "state": "online", 00:18:53.345 "raid_level": "raid5f", 00:18:53.345 "superblock": true, 00:18:53.345 "num_base_bdevs": 4, 00:18:53.345 "num_base_bdevs_discovered": 3, 00:18:53.345 "num_base_bdevs_operational": 3, 00:18:53.345 "base_bdevs_list": [ 00:18:53.345 { 00:18:53.345 "name": null, 00:18:53.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.345 "is_configured": false, 00:18:53.345 "data_offset": 0, 00:18:53.345 "data_size": 63488 00:18:53.345 }, 00:18:53.345 { 00:18:53.345 "name": "BaseBdev2", 00:18:53.345 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:53.345 "is_configured": true, 00:18:53.345 "data_offset": 2048, 00:18:53.345 "data_size": 63488 00:18:53.345 }, 00:18:53.345 { 00:18:53.345 "name": "BaseBdev3", 00:18:53.345 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:53.345 "is_configured": true, 00:18:53.345 "data_offset": 2048, 00:18:53.345 "data_size": 63488 00:18:53.345 }, 00:18:53.345 { 00:18:53.345 "name": "BaseBdev4", 00:18:53.345 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:53.345 "is_configured": true, 00:18:53.345 "data_offset": 2048, 00:18:53.345 "data_size": 63488 00:18:53.345 } 00:18:53.345 ] 00:18:53.345 }' 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.345 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:53.604 
11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.604 [2024-11-15 11:30:36.312720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:53.604 [2024-11-15 11:30:36.313029] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:53.604 [2024-11-15 11:30:36.313054] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:53.604 request: 00:18:53.604 { 00:18:53.604 "base_bdev": "BaseBdev1", 00:18:53.604 "raid_bdev": "raid_bdev1", 00:18:53.604 "method": "bdev_raid_add_base_bdev", 00:18:53.604 "req_id": 1 00:18:53.604 } 00:18:53.604 Got JSON-RPC error response 00:18:53.604 response: 00:18:53.604 { 00:18:53.604 "code": -22, 00:18:53.604 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:53.604 } 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:18:53.604 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:53.605 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:53.605 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:53.605 11:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.541 "name": "raid_bdev1", 00:18:54.541 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:54.541 "strip_size_kb": 64, 00:18:54.541 "state": "online", 00:18:54.541 "raid_level": "raid5f", 00:18:54.541 "superblock": true, 00:18:54.541 "num_base_bdevs": 4, 00:18:54.541 "num_base_bdevs_discovered": 3, 00:18:54.541 "num_base_bdevs_operational": 3, 00:18:54.541 "base_bdevs_list": [ 00:18:54.541 { 00:18:54.541 "name": null, 00:18:54.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.541 "is_configured": false, 00:18:54.541 "data_offset": 0, 00:18:54.541 "data_size": 63488 00:18:54.541 }, 00:18:54.541 { 00:18:54.541 "name": "BaseBdev2", 00:18:54.541 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:54.541 "is_configured": true, 00:18:54.541 "data_offset": 2048, 00:18:54.541 "data_size": 63488 00:18:54.541 }, 00:18:54.541 { 00:18:54.541 "name": "BaseBdev3", 00:18:54.541 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:54.541 "is_configured": true, 00:18:54.541 "data_offset": 2048, 00:18:54.541 "data_size": 63488 00:18:54.541 }, 00:18:54.541 { 00:18:54.541 "name": "BaseBdev4", 00:18:54.541 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:54.541 "is_configured": true, 00:18:54.541 "data_offset": 2048, 00:18:54.541 "data_size": 63488 00:18:54.541 } 00:18:54.541 ] 00:18:54.541 }' 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.541 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.109 "name": "raid_bdev1", 00:18:55.109 "uuid": "8bbe538b-0167-4983-b8ea-0c0e0d9ea249", 00:18:55.109 "strip_size_kb": 64, 00:18:55.109 "state": "online", 00:18:55.109 "raid_level": "raid5f", 00:18:55.109 "superblock": true, 00:18:55.109 "num_base_bdevs": 4, 00:18:55.109 "num_base_bdevs_discovered": 3, 00:18:55.109 "num_base_bdevs_operational": 3, 00:18:55.109 "base_bdevs_list": [ 00:18:55.109 { 00:18:55.109 "name": null, 00:18:55.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.109 "is_configured": false, 00:18:55.109 "data_offset": 0, 00:18:55.109 "data_size": 63488 00:18:55.109 }, 00:18:55.109 { 00:18:55.109 "name": "BaseBdev2", 00:18:55.109 "uuid": "cbf6943a-7c8c-59a1-8348-1dfe85ea772b", 00:18:55.109 "is_configured": true, 
00:18:55.109 "data_offset": 2048, 00:18:55.109 "data_size": 63488 00:18:55.109 }, 00:18:55.109 { 00:18:55.109 "name": "BaseBdev3", 00:18:55.109 "uuid": "2ace97ee-b99b-5339-b561-d908f458ff49", 00:18:55.109 "is_configured": true, 00:18:55.109 "data_offset": 2048, 00:18:55.109 "data_size": 63488 00:18:55.109 }, 00:18:55.109 { 00:18:55.109 "name": "BaseBdev4", 00:18:55.109 "uuid": "1f9dfd0b-bfa6-5f57-842e-6e0b36054204", 00:18:55.109 "is_configured": true, 00:18:55.109 "data_offset": 2048, 00:18:55.109 "data_size": 63488 00:18:55.109 } 00:18:55.109 ] 00:18:55.109 }' 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.109 11:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.109 11:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.109 11:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85391 00:18:55.109 11:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 85391 ']' 00:18:55.109 11:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85391 00:18:55.109 11:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:55.109 11:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:55.109 11:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85391 00:18:55.367 killing process with pid 85391 00:18:55.367 Received shutdown signal, test time was about 60.000000 seconds 00:18:55.367 00:18:55.367 Latency(us) 00:18:55.367 [2024-11-15T11:30:38.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.367 [2024-11-15T11:30:38.317Z] 
=================================================================================================================== 00:18:55.367 [2024-11-15T11:30:38.317Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.367 11:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:55.367 11:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:55.367 11:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85391' 00:18:55.367 11:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85391 00:18:55.367 [2024-11-15 11:30:38.072422] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:55.367 11:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85391 00:18:55.367 [2024-11-15 11:30:38.072597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.367 [2024-11-15 11:30:38.072707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.367 [2024-11-15 11:30:38.072740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:55.626 [2024-11-15 11:30:38.570307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.005 11:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:57.005 00:18:57.005 real 0m28.939s 00:18:57.005 user 0m37.529s 00:18:57.005 sys 0m2.905s 00:18:57.005 11:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:57.005 11:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.005 ************************************ 00:18:57.005 END TEST raid5f_rebuild_test_sb 00:18:57.005 ************************************ 00:18:57.005 11:30:39 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:57.005 11:30:39 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:57.005 11:30:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:57.005 11:30:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:57.005 11:30:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.005 ************************************ 00:18:57.005 START TEST raid_state_function_test_sb_4k 00:18:57.005 ************************************ 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:57.005 11:30:39 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:57.005 Process raid pid: 86218 00:18:57.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86218 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86218' 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86218 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86218 ']' 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:57.005 11:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.264 [2024-11-15 11:30:39.994274] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:18:57.264 [2024-11-15 11:30:39.994440] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.264 [2024-11-15 11:30:40.180436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.524 [2024-11-15 11:30:40.345361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.782 [2024-11-15 11:30:40.599407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.782 [2024-11-15 11:30:40.599775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.350 [2024-11-15 11:30:41.030401] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.350 [2024-11-15 11:30:41.030610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.350 [2024-11-15 11:30:41.030641] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.350 [2024-11-15 11:30:41.030660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.350 "name": "Existed_Raid", 00:18:58.350 "uuid": 
"749f7e57-b20d-4b3c-9646-a6dcebb8b08c", 00:18:58.350 "strip_size_kb": 0, 00:18:58.350 "state": "configuring", 00:18:58.350 "raid_level": "raid1", 00:18:58.350 "superblock": true, 00:18:58.350 "num_base_bdevs": 2, 00:18:58.350 "num_base_bdevs_discovered": 0, 00:18:58.350 "num_base_bdevs_operational": 2, 00:18:58.350 "base_bdevs_list": [ 00:18:58.350 { 00:18:58.350 "name": "BaseBdev1", 00:18:58.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.350 "is_configured": false, 00:18:58.350 "data_offset": 0, 00:18:58.350 "data_size": 0 00:18:58.350 }, 00:18:58.350 { 00:18:58.350 "name": "BaseBdev2", 00:18:58.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.350 "is_configured": false, 00:18:58.350 "data_offset": 0, 00:18:58.350 "data_size": 0 00:18:58.350 } 00:18:58.350 ] 00:18:58.350 }' 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.350 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.919 [2024-11-15 11:30:41.570487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:58.919 [2024-11-15 11:30:41.570534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:58.919 11:30:41 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.919 [2024-11-15 11:30:41.578462] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.919 [2024-11-15 11:30:41.578517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.919 [2024-11-15 11:30:41.578533] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.919 [2024-11-15 11:30:41.578583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.919 [2024-11-15 11:30:41.624520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.919 BaseBdev1 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.919 [ 00:18:58.919 { 00:18:58.919 "name": "BaseBdev1", 00:18:58.919 "aliases": [ 00:18:58.919 "54570d52-d4a0-425e-9442-282abff4a8d9" 00:18:58.919 ], 00:18:58.919 "product_name": "Malloc disk", 00:18:58.919 "block_size": 4096, 00:18:58.919 "num_blocks": 8192, 00:18:58.919 "uuid": "54570d52-d4a0-425e-9442-282abff4a8d9", 00:18:58.919 "assigned_rate_limits": { 00:18:58.919 "rw_ios_per_sec": 0, 00:18:58.919 "rw_mbytes_per_sec": 0, 00:18:58.919 "r_mbytes_per_sec": 0, 00:18:58.919 "w_mbytes_per_sec": 0 00:18:58.919 }, 00:18:58.919 "claimed": true, 00:18:58.919 "claim_type": "exclusive_write", 00:18:58.919 "zoned": false, 00:18:58.919 "supported_io_types": { 00:18:58.919 "read": true, 00:18:58.919 "write": true, 00:18:58.919 "unmap": true, 00:18:58.919 "flush": true, 00:18:58.919 "reset": true, 00:18:58.919 "nvme_admin": false, 00:18:58.919 "nvme_io": false, 00:18:58.919 "nvme_io_md": false, 00:18:58.919 "write_zeroes": true, 00:18:58.919 "zcopy": true, 00:18:58.919 
"get_zone_info": false, 00:18:58.919 "zone_management": false, 00:18:58.919 "zone_append": false, 00:18:58.919 "compare": false, 00:18:58.919 "compare_and_write": false, 00:18:58.919 "abort": true, 00:18:58.919 "seek_hole": false, 00:18:58.919 "seek_data": false, 00:18:58.919 "copy": true, 00:18:58.919 "nvme_iov_md": false 00:18:58.919 }, 00:18:58.919 "memory_domains": [ 00:18:58.919 { 00:18:58.919 "dma_device_id": "system", 00:18:58.919 "dma_device_type": 1 00:18:58.919 }, 00:18:58.919 { 00:18:58.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.919 "dma_device_type": 2 00:18:58.919 } 00:18:58.919 ], 00:18:58.919 "driver_specific": {} 00:18:58.919 } 00:18:58.919 ] 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.919 "name": "Existed_Raid", 00:18:58.919 "uuid": "84dac8bb-e81f-4a80-8f92-1f3cd7fd9d49", 00:18:58.919 "strip_size_kb": 0, 00:18:58.919 "state": "configuring", 00:18:58.919 "raid_level": "raid1", 00:18:58.919 "superblock": true, 00:18:58.919 "num_base_bdevs": 2, 00:18:58.919 "num_base_bdevs_discovered": 1, 00:18:58.919 "num_base_bdevs_operational": 2, 00:18:58.919 "base_bdevs_list": [ 00:18:58.919 { 00:18:58.919 "name": "BaseBdev1", 00:18:58.919 "uuid": "54570d52-d4a0-425e-9442-282abff4a8d9", 00:18:58.919 "is_configured": true, 00:18:58.919 "data_offset": 256, 00:18:58.919 "data_size": 7936 00:18:58.919 }, 00:18:58.919 { 00:18:58.919 "name": "BaseBdev2", 00:18:58.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.919 "is_configured": false, 00:18:58.919 "data_offset": 0, 00:18:58.919 "data_size": 0 00:18:58.919 } 00:18:58.919 ] 00:18:58.919 }' 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.919 11:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.486 [2024-11-15 11:30:42.200893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:59.486 [2024-11-15 11:30:42.200955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.486 [2024-11-15 11:30:42.208814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.486 [2024-11-15 11:30:42.211600] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.486 [2024-11-15 11:30:42.211803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:59.486 11:30:42 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.486 "name": "Existed_Raid", 00:18:59.486 "uuid": "98a03f57-cf19-482c-93b5-fbfbdf0adf89", 00:18:59.486 "strip_size_kb": 0, 00:18:59.486 "state": "configuring", 00:18:59.486 "raid_level": "raid1", 00:18:59.486 "superblock": true, 
00:18:59.486 "num_base_bdevs": 2, 00:18:59.486 "num_base_bdevs_discovered": 1, 00:18:59.486 "num_base_bdevs_operational": 2, 00:18:59.486 "base_bdevs_list": [ 00:18:59.486 { 00:18:59.486 "name": "BaseBdev1", 00:18:59.486 "uuid": "54570d52-d4a0-425e-9442-282abff4a8d9", 00:18:59.486 "is_configured": true, 00:18:59.486 "data_offset": 256, 00:18:59.486 "data_size": 7936 00:18:59.486 }, 00:18:59.486 { 00:18:59.486 "name": "BaseBdev2", 00:18:59.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.486 "is_configured": false, 00:18:59.486 "data_offset": 0, 00:18:59.486 "data_size": 0 00:18:59.486 } 00:18:59.486 ] 00:18:59.486 }' 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.486 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.054 [2024-11-15 11:30:42.776160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.054 [2024-11-15 11:30:42.776503] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:00.054 [2024-11-15 11:30:42.776537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:00.054 [2024-11-15 11:30:42.776882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:00.054 [2024-11-15 11:30:42.777139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:00.054 [2024-11-15 11:30:42.777160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:19:00.054 BaseBdev2 00:19:00.054 [2024-11-15 11:30:42.777409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.054 [ 00:19:00.054 { 00:19:00.054 "name": "BaseBdev2", 00:19:00.054 "aliases": [ 00:19:00.054 "158c44a6-b3ef-4421-8651-8c0641f12319" 00:19:00.054 ], 00:19:00.054 "product_name": "Malloc 
disk", 00:19:00.054 "block_size": 4096, 00:19:00.054 "num_blocks": 8192, 00:19:00.054 "uuid": "158c44a6-b3ef-4421-8651-8c0641f12319", 00:19:00.054 "assigned_rate_limits": { 00:19:00.054 "rw_ios_per_sec": 0, 00:19:00.054 "rw_mbytes_per_sec": 0, 00:19:00.054 "r_mbytes_per_sec": 0, 00:19:00.054 "w_mbytes_per_sec": 0 00:19:00.054 }, 00:19:00.054 "claimed": true, 00:19:00.054 "claim_type": "exclusive_write", 00:19:00.054 "zoned": false, 00:19:00.054 "supported_io_types": { 00:19:00.054 "read": true, 00:19:00.054 "write": true, 00:19:00.054 "unmap": true, 00:19:00.054 "flush": true, 00:19:00.054 "reset": true, 00:19:00.054 "nvme_admin": false, 00:19:00.054 "nvme_io": false, 00:19:00.054 "nvme_io_md": false, 00:19:00.054 "write_zeroes": true, 00:19:00.054 "zcopy": true, 00:19:00.054 "get_zone_info": false, 00:19:00.054 "zone_management": false, 00:19:00.054 "zone_append": false, 00:19:00.054 "compare": false, 00:19:00.054 "compare_and_write": false, 00:19:00.054 "abort": true, 00:19:00.054 "seek_hole": false, 00:19:00.054 "seek_data": false, 00:19:00.054 "copy": true, 00:19:00.054 "nvme_iov_md": false 00:19:00.054 }, 00:19:00.054 "memory_domains": [ 00:19:00.054 { 00:19:00.054 "dma_device_id": "system", 00:19:00.054 "dma_device_type": 1 00:19:00.054 }, 00:19:00.054 { 00:19:00.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.054 "dma_device_type": 2 00:19:00.054 } 00:19:00.054 ], 00:19:00.054 "driver_specific": {} 00:19:00.054 } 00:19:00.054 ] 00:19:00.054 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.055 "name": "Existed_Raid", 00:19:00.055 "uuid": "98a03f57-cf19-482c-93b5-fbfbdf0adf89", 00:19:00.055 "strip_size_kb": 0, 00:19:00.055 "state": "online", 
00:19:00.055 "raid_level": "raid1", 00:19:00.055 "superblock": true, 00:19:00.055 "num_base_bdevs": 2, 00:19:00.055 "num_base_bdevs_discovered": 2, 00:19:00.055 "num_base_bdevs_operational": 2, 00:19:00.055 "base_bdevs_list": [ 00:19:00.055 { 00:19:00.055 "name": "BaseBdev1", 00:19:00.055 "uuid": "54570d52-d4a0-425e-9442-282abff4a8d9", 00:19:00.055 "is_configured": true, 00:19:00.055 "data_offset": 256, 00:19:00.055 "data_size": 7936 00:19:00.055 }, 00:19:00.055 { 00:19:00.055 "name": "BaseBdev2", 00:19:00.055 "uuid": "158c44a6-b3ef-4421-8651-8c0641f12319", 00:19:00.055 "is_configured": true, 00:19:00.055 "data_offset": 256, 00:19:00.055 "data_size": 7936 00:19:00.055 } 00:19:00.055 ] 00:19:00.055 }' 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.055 11:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.624 [2024-11-15 11:30:43.340862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:00.624 "name": "Existed_Raid", 00:19:00.624 "aliases": [ 00:19:00.624 "98a03f57-cf19-482c-93b5-fbfbdf0adf89" 00:19:00.624 ], 00:19:00.624 "product_name": "Raid Volume", 00:19:00.624 "block_size": 4096, 00:19:00.624 "num_blocks": 7936, 00:19:00.624 "uuid": "98a03f57-cf19-482c-93b5-fbfbdf0adf89", 00:19:00.624 "assigned_rate_limits": { 00:19:00.624 "rw_ios_per_sec": 0, 00:19:00.624 "rw_mbytes_per_sec": 0, 00:19:00.624 "r_mbytes_per_sec": 0, 00:19:00.624 "w_mbytes_per_sec": 0 00:19:00.624 }, 00:19:00.624 "claimed": false, 00:19:00.624 "zoned": false, 00:19:00.624 "supported_io_types": { 00:19:00.624 "read": true, 00:19:00.624 "write": true, 00:19:00.624 "unmap": false, 00:19:00.624 "flush": false, 00:19:00.624 "reset": true, 00:19:00.624 "nvme_admin": false, 00:19:00.624 "nvme_io": false, 00:19:00.624 "nvme_io_md": false, 00:19:00.624 "write_zeroes": true, 00:19:00.624 "zcopy": false, 00:19:00.624 "get_zone_info": false, 00:19:00.624 "zone_management": false, 00:19:00.624 "zone_append": false, 00:19:00.624 "compare": false, 00:19:00.624 "compare_and_write": false, 00:19:00.624 "abort": false, 00:19:00.624 "seek_hole": false, 00:19:00.624 "seek_data": false, 00:19:00.624 "copy": false, 00:19:00.624 "nvme_iov_md": false 00:19:00.624 }, 00:19:00.624 "memory_domains": [ 00:19:00.624 { 00:19:00.624 "dma_device_id": "system", 00:19:00.624 "dma_device_type": 1 00:19:00.624 }, 00:19:00.624 { 00:19:00.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.624 "dma_device_type": 2 00:19:00.624 }, 00:19:00.624 { 00:19:00.624 
"dma_device_id": "system", 00:19:00.624 "dma_device_type": 1 00:19:00.624 }, 00:19:00.624 { 00:19:00.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.624 "dma_device_type": 2 00:19:00.624 } 00:19:00.624 ], 00:19:00.624 "driver_specific": { 00:19:00.624 "raid": { 00:19:00.624 "uuid": "98a03f57-cf19-482c-93b5-fbfbdf0adf89", 00:19:00.624 "strip_size_kb": 0, 00:19:00.624 "state": "online", 00:19:00.624 "raid_level": "raid1", 00:19:00.624 "superblock": true, 00:19:00.624 "num_base_bdevs": 2, 00:19:00.624 "num_base_bdevs_discovered": 2, 00:19:00.624 "num_base_bdevs_operational": 2, 00:19:00.624 "base_bdevs_list": [ 00:19:00.624 { 00:19:00.624 "name": "BaseBdev1", 00:19:00.624 "uuid": "54570d52-d4a0-425e-9442-282abff4a8d9", 00:19:00.624 "is_configured": true, 00:19:00.624 "data_offset": 256, 00:19:00.624 "data_size": 7936 00:19:00.624 }, 00:19:00.624 { 00:19:00.624 "name": "BaseBdev2", 00:19:00.624 "uuid": "158c44a6-b3ef-4421-8651-8c0641f12319", 00:19:00.624 "is_configured": true, 00:19:00.624 "data_offset": 256, 00:19:00.624 "data_size": 7936 00:19:00.624 } 00:19:00.624 ] 00:19:00.624 } 00:19:00.624 } 00:19:00.624 }' 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:00.624 BaseBdev2' 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.624 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.884 
11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.884 [2024-11-15 11:30:43.612610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.884 11:30:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.884 "name": "Existed_Raid", 00:19:00.884 "uuid": "98a03f57-cf19-482c-93b5-fbfbdf0adf89", 00:19:00.884 "strip_size_kb": 0, 00:19:00.884 "state": "online", 00:19:00.884 "raid_level": "raid1", 00:19:00.884 "superblock": true, 00:19:00.884 "num_base_bdevs": 2, 00:19:00.884 "num_base_bdevs_discovered": 1, 00:19:00.884 "num_base_bdevs_operational": 1, 00:19:00.884 "base_bdevs_list": [ 00:19:00.884 { 00:19:00.884 "name": null, 00:19:00.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.884 "is_configured": false, 00:19:00.884 "data_offset": 0, 00:19:00.884 "data_size": 7936 00:19:00.884 }, 00:19:00.884 { 00:19:00.884 "name": "BaseBdev2", 00:19:00.884 "uuid": "158c44a6-b3ef-4421-8651-8c0641f12319", 00:19:00.884 "is_configured": true, 00:19:00.884 "data_offset": 256, 00:19:00.884 "data_size": 7936 00:19:00.884 } 00:19:00.884 ] 00:19:00.884 }' 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.884 11:30:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.452 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:01.452 11:30:44 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:01.452 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.452 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.452 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.452 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:01.452 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.452 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:01.452 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:01.452 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:01.452 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.452 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.452 [2024-11-15 11:30:44.293398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:01.452 [2024-11-15 11:30:44.293551] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.453 [2024-11-15 11:30:44.365338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.453 [2024-11-15 11:30:44.365400] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.453 [2024-11-15 11:30:44.365420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:01.453 11:30:44 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.453 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:01.453 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:01.453 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.453 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.453 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:01.453 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.453 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86218 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86218 ']' 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86218 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86218 00:19:01.712 killing process with pid 86218 00:19:01.712 11:30:44 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86218' 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86218 00:19:01.712 [2024-11-15 11:30:44.460874] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.712 11:30:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86218 00:19:01.712 [2024-11-15 11:30:44.476719] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:02.654 11:30:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:02.654 00:19:02.654 real 0m5.603s 00:19:02.654 user 0m8.448s 00:19:02.654 sys 0m0.925s 00:19:02.654 11:30:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:02.654 ************************************ 00:19:02.654 END TEST raid_state_function_test_sb_4k 00:19:02.654 ************************************ 00:19:02.654 11:30:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.654 11:30:45 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:02.654 11:30:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:02.654 11:30:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:02.654 11:30:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:02.654 ************************************ 00:19:02.654 START TEST raid_superblock_test_4k 00:19:02.654 ************************************ 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # 
raid_superblock_test raid1 2 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:02.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86471 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86471 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86471 ']' 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:02.654 11:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.913 [2024-11-15 11:30:45.657267] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:19:02.914 [2024-11-15 11:30:45.657460] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86471 ] 00:19:02.914 [2024-11-15 11:30:45.845216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.173 [2024-11-15 11:30:45.969563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.432 [2024-11-15 11:30:46.167300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.432 [2024-11-15 11:30:46.167385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.000 malloc1 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.000 [2024-11-15 11:30:46.720475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:04.000 [2024-11-15 11:30:46.720754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.000 [2024-11-15 11:30:46.720831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:04.000 [2024-11-15 11:30:46.721087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.000 [2024-11-15 11:30:46.724229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.000 [2024-11-15 11:30:46.724284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:04.000 pt1 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.000 malloc2 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.000 [2024-11-15 11:30:46.778487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:04.000 [2024-11-15 11:30:46.778798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.000 [2024-11-15 11:30:46.778847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:04.000 [2024-11-15 11:30:46.778863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.000 [2024-11-15 11:30:46.781827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.000 [2024-11-15 
11:30:46.782013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:04.000 pt2 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.000 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.000 [2024-11-15 11:30:46.790746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:04.001 [2024-11-15 11:30:46.793229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.001 [2024-11-15 11:30:46.793456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:04.001 [2024-11-15 11:30:46.793494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:04.001 [2024-11-15 11:30:46.793826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:04.001 [2024-11-15 11:30:46.794071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:04.001 [2024-11-15 11:30:46.794096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:04.001 [2024-11-15 11:30:46.794353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.001 "name": "raid_bdev1", 00:19:04.001 "uuid": "37fcc8cc-f7d0-4499-b2eb-185bf1f45252", 00:19:04.001 "strip_size_kb": 0, 00:19:04.001 "state": "online", 00:19:04.001 "raid_level": "raid1", 00:19:04.001 "superblock": true, 00:19:04.001 "num_base_bdevs": 2, 00:19:04.001 
"num_base_bdevs_discovered": 2, 00:19:04.001 "num_base_bdevs_operational": 2, 00:19:04.001 "base_bdevs_list": [ 00:19:04.001 { 00:19:04.001 "name": "pt1", 00:19:04.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.001 "is_configured": true, 00:19:04.001 "data_offset": 256, 00:19:04.001 "data_size": 7936 00:19:04.001 }, 00:19:04.001 { 00:19:04.001 "name": "pt2", 00:19:04.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.001 "is_configured": true, 00:19:04.001 "data_offset": 256, 00:19:04.001 "data_size": 7936 00:19:04.001 } 00:19:04.001 ] 00:19:04.001 }' 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.001 11:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.571 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:04.571 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:04.571 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:04.571 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:04.571 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:04.571 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:04.571 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.571 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.571 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.571 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:04.571 [2024-11-15 11:30:47.323380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:04.571 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.571 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:04.571 "name": "raid_bdev1", 00:19:04.571 "aliases": [ 00:19:04.571 "37fcc8cc-f7d0-4499-b2eb-185bf1f45252" 00:19:04.571 ], 00:19:04.571 "product_name": "Raid Volume", 00:19:04.571 "block_size": 4096, 00:19:04.571 "num_blocks": 7936, 00:19:04.571 "uuid": "37fcc8cc-f7d0-4499-b2eb-185bf1f45252", 00:19:04.571 "assigned_rate_limits": { 00:19:04.571 "rw_ios_per_sec": 0, 00:19:04.571 "rw_mbytes_per_sec": 0, 00:19:04.571 "r_mbytes_per_sec": 0, 00:19:04.571 "w_mbytes_per_sec": 0 00:19:04.571 }, 00:19:04.571 "claimed": false, 00:19:04.571 "zoned": false, 00:19:04.571 "supported_io_types": { 00:19:04.571 "read": true, 00:19:04.571 "write": true, 00:19:04.571 "unmap": false, 00:19:04.571 "flush": false, 00:19:04.571 "reset": true, 00:19:04.571 "nvme_admin": false, 00:19:04.571 "nvme_io": false, 00:19:04.571 "nvme_io_md": false, 00:19:04.571 "write_zeroes": true, 00:19:04.571 "zcopy": false, 00:19:04.571 "get_zone_info": false, 00:19:04.571 "zone_management": false, 00:19:04.571 "zone_append": false, 00:19:04.571 "compare": false, 00:19:04.571 "compare_and_write": false, 00:19:04.571 "abort": false, 00:19:04.571 "seek_hole": false, 00:19:04.571 "seek_data": false, 00:19:04.571 "copy": false, 00:19:04.571 "nvme_iov_md": false 00:19:04.571 }, 00:19:04.571 "memory_domains": [ 00:19:04.571 { 00:19:04.571 "dma_device_id": "system", 00:19:04.571 "dma_device_type": 1 00:19:04.571 }, 00:19:04.571 { 00:19:04.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.571 "dma_device_type": 2 00:19:04.571 }, 00:19:04.571 { 00:19:04.571 "dma_device_id": "system", 00:19:04.571 "dma_device_type": 1 00:19:04.571 }, 00:19:04.571 { 00:19:04.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.571 "dma_device_type": 2 00:19:04.571 } 00:19:04.571 ], 
00:19:04.571 "driver_specific": { 00:19:04.571 "raid": { 00:19:04.572 "uuid": "37fcc8cc-f7d0-4499-b2eb-185bf1f45252", 00:19:04.572 "strip_size_kb": 0, 00:19:04.572 "state": "online", 00:19:04.572 "raid_level": "raid1", 00:19:04.572 "superblock": true, 00:19:04.572 "num_base_bdevs": 2, 00:19:04.572 "num_base_bdevs_discovered": 2, 00:19:04.572 "num_base_bdevs_operational": 2, 00:19:04.572 "base_bdevs_list": [ 00:19:04.572 { 00:19:04.572 "name": "pt1", 00:19:04.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.572 "is_configured": true, 00:19:04.572 "data_offset": 256, 00:19:04.572 "data_size": 7936 00:19:04.572 }, 00:19:04.572 { 00:19:04.572 "name": "pt2", 00:19:04.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.572 "is_configured": true, 00:19:04.572 "data_offset": 256, 00:19:04.572 "data_size": 7936 00:19:04.572 } 00:19:04.572 ] 00:19:04.572 } 00:19:04.572 } 00:19:04.572 }' 00:19:04.572 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:04.572 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:04.572 pt2' 00:19:04.572 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.572 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:04.572 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.572 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.572 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:04.572 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.572 11:30:47 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.572 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.835 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:04.836 [2024-11-15 11:30:47.591260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=37fcc8cc-f7d0-4499-b2eb-185bf1f45252 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 37fcc8cc-f7d0-4499-b2eb-185bf1f45252 ']' 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.836 [2024-11-15 11:30:47.638898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.836 [2024-11-15 11:30:47.639057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.836 [2024-11-15 11:30:47.639332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.836 [2024-11-15 11:30:47.639528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.836 [2024-11-15 11:30:47.639681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.836 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.836 [2024-11-15 11:30:47.782964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:05.096 [2024-11-15 11:30:47.785954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:05.096 [2024-11-15 11:30:47.786045] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:05.096 [2024-11-15 11:30:47.786117] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:05.096 [2024-11-15 11:30:47.786141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.096 [2024-11-15 11:30:47.786316] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:05.096 request: 00:19:05.096 { 00:19:05.096 "name": "raid_bdev1", 00:19:05.096 "raid_level": "raid1", 00:19:05.096 "base_bdevs": [ 00:19:05.096 "malloc1", 00:19:05.096 "malloc2" 00:19:05.096 ], 00:19:05.096 "superblock": false, 00:19:05.096 "method": "bdev_raid_create", 00:19:05.096 "req_id": 1 00:19:05.096 } 00:19:05.096 Got JSON-RPC error response 00:19:05.096 response: 00:19:05.096 { 00:19:05.096 "code": -17, 00:19:05.096 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:05.096 } 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.096 [2024-11-15 11:30:47.847009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:05.096 [2024-11-15 11:30:47.847259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.096 [2024-11-15 11:30:47.847299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:05.096 [2024-11-15 11:30:47.847319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.096 [2024-11-15 11:30:47.850419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.096 [2024-11-15 11:30:47.850485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:05.096 [2024-11-15 11:30:47.850620] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:05.096 [2024-11-15 11:30:47.850692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:05.096 pt1 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.096 "name": "raid_bdev1", 00:19:05.096 "uuid": "37fcc8cc-f7d0-4499-b2eb-185bf1f45252", 00:19:05.096 "strip_size_kb": 0, 00:19:05.096 "state": "configuring", 00:19:05.096 "raid_level": "raid1", 00:19:05.096 "superblock": true, 00:19:05.096 "num_base_bdevs": 2, 00:19:05.096 "num_base_bdevs_discovered": 1, 00:19:05.096 "num_base_bdevs_operational": 2, 00:19:05.096 "base_bdevs_list": [ 00:19:05.096 { 00:19:05.096 "name": "pt1", 00:19:05.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.096 "is_configured": true, 00:19:05.096 "data_offset": 256, 00:19:05.096 "data_size": 7936 00:19:05.096 }, 00:19:05.096 { 00:19:05.096 "name": null, 00:19:05.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.096 "is_configured": false, 00:19:05.096 "data_offset": 256, 00:19:05.096 "data_size": 7936 00:19:05.096 } 
00:19:05.096 ] 00:19:05.096 }' 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.096 11:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.664 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:05.664 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.665 [2024-11-15 11:30:48.371249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:05.665 [2024-11-15 11:30:48.371387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.665 [2024-11-15 11:30:48.371421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:05.665 [2024-11-15 11:30:48.371441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.665 [2024-11-15 11:30:48.372107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.665 [2024-11-15 11:30:48.372142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:05.665 [2024-11-15 11:30:48.372294] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:05.665 [2024-11-15 11:30:48.372350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.665 [2024-11-15 11:30:48.372509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:05.665 [2024-11-15 11:30:48.372538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:05.665 [2024-11-15 11:30:48.372878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:05.665 [2024-11-15 11:30:48.373096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:05.665 [2024-11-15 11:30:48.373119] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:05.665 [2024-11-15 11:30:48.373342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.665 pt2 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.665 "name": "raid_bdev1", 00:19:05.665 "uuid": "37fcc8cc-f7d0-4499-b2eb-185bf1f45252", 00:19:05.665 "strip_size_kb": 0, 00:19:05.665 "state": "online", 00:19:05.665 "raid_level": "raid1", 00:19:05.665 "superblock": true, 00:19:05.665 "num_base_bdevs": 2, 00:19:05.665 "num_base_bdevs_discovered": 2, 00:19:05.665 "num_base_bdevs_operational": 2, 00:19:05.665 "base_bdevs_list": [ 00:19:05.665 { 00:19:05.665 "name": "pt1", 00:19:05.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.665 "is_configured": true, 00:19:05.665 "data_offset": 256, 00:19:05.665 "data_size": 7936 00:19:05.665 }, 00:19:05.665 { 00:19:05.665 "name": "pt2", 00:19:05.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.665 "is_configured": true, 00:19:05.665 "data_offset": 256, 00:19:05.665 "data_size": 7936 00:19:05.665 } 00:19:05.665 ] 00:19:05.665 }' 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.665 11:30:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.233 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:19:06.233 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:06.233 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:06.233 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:06.233 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:06.233 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:06.233 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.233 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:06.233 11:30:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.233 11:30:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.233 [2024-11-15 11:30:48.907742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.233 11:30:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.233 11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.233 "name": "raid_bdev1", 00:19:06.233 "aliases": [ 00:19:06.233 "37fcc8cc-f7d0-4499-b2eb-185bf1f45252" 00:19:06.233 ], 00:19:06.233 "product_name": "Raid Volume", 00:19:06.233 "block_size": 4096, 00:19:06.233 "num_blocks": 7936, 00:19:06.233 "uuid": "37fcc8cc-f7d0-4499-b2eb-185bf1f45252", 00:19:06.233 "assigned_rate_limits": { 00:19:06.233 "rw_ios_per_sec": 0, 00:19:06.233 "rw_mbytes_per_sec": 0, 00:19:06.233 "r_mbytes_per_sec": 0, 00:19:06.233 "w_mbytes_per_sec": 0 00:19:06.233 }, 00:19:06.233 "claimed": false, 00:19:06.233 "zoned": false, 00:19:06.233 "supported_io_types": { 00:19:06.233 "read": true, 00:19:06.233 "write": true, 00:19:06.233 "unmap": false, 
00:19:06.233 "flush": false, 00:19:06.233 "reset": true, 00:19:06.233 "nvme_admin": false, 00:19:06.233 "nvme_io": false, 00:19:06.233 "nvme_io_md": false, 00:19:06.233 "write_zeroes": true, 00:19:06.233 "zcopy": false, 00:19:06.233 "get_zone_info": false, 00:19:06.233 "zone_management": false, 00:19:06.233 "zone_append": false, 00:19:06.233 "compare": false, 00:19:06.233 "compare_and_write": false, 00:19:06.233 "abort": false, 00:19:06.233 "seek_hole": false, 00:19:06.233 "seek_data": false, 00:19:06.233 "copy": false, 00:19:06.233 "nvme_iov_md": false 00:19:06.233 }, 00:19:06.233 "memory_domains": [ 00:19:06.233 { 00:19:06.233 "dma_device_id": "system", 00:19:06.233 "dma_device_type": 1 00:19:06.233 }, 00:19:06.233 { 00:19:06.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.233 "dma_device_type": 2 00:19:06.233 }, 00:19:06.233 { 00:19:06.233 "dma_device_id": "system", 00:19:06.233 "dma_device_type": 1 00:19:06.233 }, 00:19:06.233 { 00:19:06.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.233 "dma_device_type": 2 00:19:06.233 } 00:19:06.233 ], 00:19:06.233 "driver_specific": { 00:19:06.233 "raid": { 00:19:06.233 "uuid": "37fcc8cc-f7d0-4499-b2eb-185bf1f45252", 00:19:06.233 "strip_size_kb": 0, 00:19:06.233 "state": "online", 00:19:06.233 "raid_level": "raid1", 00:19:06.233 "superblock": true, 00:19:06.233 "num_base_bdevs": 2, 00:19:06.233 "num_base_bdevs_discovered": 2, 00:19:06.233 "num_base_bdevs_operational": 2, 00:19:06.233 "base_bdevs_list": [ 00:19:06.233 { 00:19:06.233 "name": "pt1", 00:19:06.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.233 "is_configured": true, 00:19:06.233 "data_offset": 256, 00:19:06.233 "data_size": 7936 00:19:06.233 }, 00:19:06.233 { 00:19:06.233 "name": "pt2", 00:19:06.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.233 "is_configured": true, 00:19:06.233 "data_offset": 256, 00:19:06.233 "data_size": 7936 00:19:06.233 } 00:19:06.233 ] 00:19:06.233 } 00:19:06.233 } 00:19:06.233 }' 00:19:06.233 
11:30:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:06.233 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:06.234 pt2' 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.234 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.493 [2024-11-15 11:30:49.183759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 37fcc8cc-f7d0-4499-b2eb-185bf1f45252 '!=' 37fcc8cc-f7d0-4499-b2eb-185bf1f45252 ']' 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.493 [2024-11-15 11:30:49.231479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.493 "name": "raid_bdev1", 00:19:06.493 "uuid": 
"37fcc8cc-f7d0-4499-b2eb-185bf1f45252", 00:19:06.493 "strip_size_kb": 0, 00:19:06.493 "state": "online", 00:19:06.493 "raid_level": "raid1", 00:19:06.493 "superblock": true, 00:19:06.493 "num_base_bdevs": 2, 00:19:06.493 "num_base_bdevs_discovered": 1, 00:19:06.493 "num_base_bdevs_operational": 1, 00:19:06.493 "base_bdevs_list": [ 00:19:06.493 { 00:19:06.493 "name": null, 00:19:06.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.493 "is_configured": false, 00:19:06.493 "data_offset": 0, 00:19:06.493 "data_size": 7936 00:19:06.493 }, 00:19:06.493 { 00:19:06.493 "name": "pt2", 00:19:06.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.493 "is_configured": true, 00:19:06.493 "data_offset": 256, 00:19:06.493 "data_size": 7936 00:19:06.493 } 00:19:06.493 ] 00:19:06.493 }' 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.493 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.061 [2024-11-15 11:30:49.767625] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.061 [2024-11-15 11:30:49.767889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.061 [2024-11-15 11:30:49.768103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.061 [2024-11-15 11:30:49.768214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.061 [2024-11-15 11:30:49.768239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.061 [2024-11-15 11:30:49.843653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:07.061 [2024-11-15 11:30:49.843730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.061 [2024-11-15 11:30:49.843755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:07.061 [2024-11-15 11:30:49.843771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.061 [2024-11-15 11:30:49.846934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.061 [2024-11-15 11:30:49.847150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:07.061 [2024-11-15 11:30:49.847299] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:07.061 [2024-11-15 11:30:49.847368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:07.061 [2024-11-15 11:30:49.847505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:07.061 [2024-11-15 11:30:49.847529] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:07.061 [2024-11-15 11:30:49.847869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:07.061 [2024-11-15 11:30:49.848058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:07.061 [2024-11-15 11:30:49.848074] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:19:07.061 pt2 00:19:07.061 [2024-11-15 11:30:49.848331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.061 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.062 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:07.062 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.062 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.062 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.062 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.062 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.062 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.062 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.062 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.062 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.062 11:30:49 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.062 "name": "raid_bdev1", 00:19:07.062 "uuid": "37fcc8cc-f7d0-4499-b2eb-185bf1f45252", 00:19:07.062 "strip_size_kb": 0, 00:19:07.062 "state": "online", 00:19:07.062 "raid_level": "raid1", 00:19:07.062 "superblock": true, 00:19:07.062 "num_base_bdevs": 2, 00:19:07.062 "num_base_bdevs_discovered": 1, 00:19:07.062 "num_base_bdevs_operational": 1, 00:19:07.062 "base_bdevs_list": [ 00:19:07.062 { 00:19:07.062 "name": null, 00:19:07.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.062 "is_configured": false, 00:19:07.062 "data_offset": 256, 00:19:07.062 "data_size": 7936 00:19:07.062 }, 00:19:07.062 { 00:19:07.062 "name": "pt2", 00:19:07.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.062 "is_configured": true, 00:19:07.062 "data_offset": 256, 00:19:07.062 "data_size": 7936 00:19:07.062 } 00:19:07.062 ] 00:19:07.062 }' 00:19:07.062 11:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.062 11:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.629 [2024-11-15 11:30:50.395795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.629 [2024-11-15 11:30:50.395832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.629 [2024-11-15 11:30:50.395924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.629 [2024-11-15 11:30:50.395991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:07.629 [2024-11-15 11:30:50.396005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:07.629 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.630 [2024-11-15 11:30:50.459897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:07.630 [2024-11-15 11:30:50.459994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.630 [2024-11-15 11:30:50.460024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:07.630 [2024-11-15 11:30:50.460039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.630 [2024-11-15 11:30:50.463538] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.630 [2024-11-15 11:30:50.463596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:07.630 [2024-11-15 11:30:50.463733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:07.630 [2024-11-15 11:30:50.463822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:07.630 [2024-11-15 11:30:50.464060] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:07.630 [2024-11-15 11:30:50.464079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.630 [2024-11-15 11:30:50.464124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:07.630 [2024-11-15 11:30:50.464221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:07.630 [2024-11-15 11:30:50.464386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:07.630 [2024-11-15 11:30:50.464403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:07.630 pt1 00:19:07.630 [2024-11-15 11:30:50.464729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:07.630 [2024-11-15 11:30:50.464930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:07.630 [2024-11-15 11:30:50.464983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:07.630 [2024-11-15 11:30:50.465193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.630 "name": "raid_bdev1", 00:19:07.630 "uuid": "37fcc8cc-f7d0-4499-b2eb-185bf1f45252", 00:19:07.630 "strip_size_kb": 0, 00:19:07.630 "state": "online", 00:19:07.630 
"raid_level": "raid1", 00:19:07.630 "superblock": true, 00:19:07.630 "num_base_bdevs": 2, 00:19:07.630 "num_base_bdevs_discovered": 1, 00:19:07.630 "num_base_bdevs_operational": 1, 00:19:07.630 "base_bdevs_list": [ 00:19:07.630 { 00:19:07.630 "name": null, 00:19:07.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.630 "is_configured": false, 00:19:07.630 "data_offset": 256, 00:19:07.630 "data_size": 7936 00:19:07.630 }, 00:19:07.630 { 00:19:07.630 "name": "pt2", 00:19:07.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.630 "is_configured": true, 00:19:07.630 "data_offset": 256, 00:19:07.630 "data_size": 7936 00:19:07.630 } 00:19:07.630 ] 00:19:07.630 }' 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.630 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.198 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:08.198 11:30:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:08.198 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.198 11:30:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.198 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.198 11:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:08.198 11:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:08.198 11:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:08.198 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.198 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:19:08.198 [2024-11-15 11:30:51.052708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.198 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.198 11:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 37fcc8cc-f7d0-4499-b2eb-185bf1f45252 '!=' 37fcc8cc-f7d0-4499-b2eb-185bf1f45252 ']' 00:19:08.198 11:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86471 00:19:08.199 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86471 ']' 00:19:08.199 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86471 00:19:08.199 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:19:08.199 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:08.199 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86471 00:19:08.199 killing process with pid 86471 00:19:08.199 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:08.199 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:08.199 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86471' 00:19:08.199 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86471 00:19:08.199 [2024-11-15 11:30:51.132576] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.199 11:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86471 00:19:08.199 [2024-11-15 11:30:51.132711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.199 [2024-11-15 11:30:51.132775] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.199 [2024-11-15 11:30:51.132801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:08.458 [2024-11-15 11:30:51.340823] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:09.837 11:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:09.837 00:19:09.837 real 0m7.046s 00:19:09.837 user 0m10.986s 00:19:09.837 sys 0m1.055s 00:19:09.837 11:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:09.837 11:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.837 ************************************ 00:19:09.837 END TEST raid_superblock_test_4k 00:19:09.837 ************************************ 00:19:09.837 11:30:52 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:19:09.837 11:30:52 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:09.837 11:30:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:09.837 11:30:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:09.837 11:30:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.837 ************************************ 00:19:09.838 START TEST raid_rebuild_test_sb_4k 00:19:09.838 ************************************ 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:09.838 
11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86805 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86805 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86805 ']' 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:09.838 11:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.838 [2024-11-15 11:30:52.770499] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:19:09.838 [2024-11-15 11:30:52.770979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86805 ] 00:19:09.838 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:19:09.838 Zero copy mechanism will not be used. 00:19:10.097 [2024-11-15 11:30:52.952855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.357 [2024-11-15 11:30:53.111347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.616 [2024-11-15 11:30:53.361179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.616 [2024-11-15 11:30:53.361535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.875 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:10.875 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:19:10.875 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:10.875 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:10.875 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.875 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.135 BaseBdev1_malloc 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.135 [2024-11-15 11:30:53.862399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:11.135 [2024-11-15 11:30:53.862719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.135 [2024-11-15 11:30:53.862762] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:19:11.135 [2024-11-15 11:30:53.862782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.135 [2024-11-15 11:30:53.865953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.135 BaseBdev1 00:19:11.135 [2024-11-15 11:30:53.866186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.135 BaseBdev2_malloc 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.135 [2024-11-15 11:30:53.927885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:11.135 [2024-11-15 11:30:53.928009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.135 [2024-11-15 11:30:53.928058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:11.135 [2024-11-15 11:30:53.928075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:19:11.135 [2024-11-15 11:30:53.931463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.135 [2024-11-15 11:30:53.931512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:11.135 BaseBdev2 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.135 spare_malloc 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.135 11:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.135 spare_delay 00:19:11.135 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.135 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:11.135 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.135 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.135 [2024-11-15 11:30:54.007711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:11.135 [2024-11-15 11:30:54.007807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.135 [2024-11-15 11:30:54.007837] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:11.135 [2024-11-15 11:30:54.007854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.135 [2024-11-15 11:30:54.011322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.135 spare 00:19:11.135 [2024-11-15 11:30:54.011573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:11.135 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.135 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:11.135 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.135 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.135 [2024-11-15 11:30:54.019867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.135 [2024-11-15 11:30:54.023097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:11.135 [2024-11-15 11:30:54.023558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:11.135 [2024-11-15 11:30:54.023593] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:11.135 [2024-11-15 11:30:54.024003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:11.135 [2024-11-15 11:30:54.024277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:11.135 [2024-11-15 11:30:54.024307] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:11.135 [2024-11-15 11:30:54.024584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.135 
11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.135 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:11.135 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.135 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.136 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.136 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.136 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.136 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.136 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.136 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.136 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.136 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.136 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.136 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.136 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.136 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.394 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.394 "name": "raid_bdev1", 00:19:11.394 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 
00:19:11.394 "strip_size_kb": 0, 00:19:11.394 "state": "online", 00:19:11.394 "raid_level": "raid1", 00:19:11.394 "superblock": true, 00:19:11.394 "num_base_bdevs": 2, 00:19:11.394 "num_base_bdevs_discovered": 2, 00:19:11.394 "num_base_bdevs_operational": 2, 00:19:11.394 "base_bdevs_list": [ 00:19:11.394 { 00:19:11.394 "name": "BaseBdev1", 00:19:11.394 "uuid": "959fce10-1143-5470-b226-bf1124968259", 00:19:11.394 "is_configured": true, 00:19:11.394 "data_offset": 256, 00:19:11.394 "data_size": 7936 00:19:11.394 }, 00:19:11.394 { 00:19:11.394 "name": "BaseBdev2", 00:19:11.394 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:11.394 "is_configured": true, 00:19:11.394 "data_offset": 256, 00:19:11.394 "data_size": 7936 00:19:11.394 } 00:19:11.394 ] 00:19:11.394 }' 00:19:11.394 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.394 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.653 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.653 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.653 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:11.653 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.653 [2024-11-15 11:30:54.549237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.653 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.653 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:11.653 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.653 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.653 11:30:54 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.912 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:11.912 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:11.913 11:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:12.172 [2024-11-15 11:30:54.969114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:19:12.172 /dev/nbd0 00:19:12.172 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:12.172 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:12.172 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:12.172 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:19:12.172 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:12.172 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:12.172 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:12.172 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:19:12.172 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:12.172 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:12.173 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:12.173 1+0 records in 00:19:12.173 1+0 records out 00:19:12.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628763 s, 6.5 MB/s 00:19:12.173 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:12.173 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:19:12.173 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:12.173 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:12.173 11:30:55 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:19:12.173 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:12.173 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:12.173 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:12.173 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:12.173 11:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:13.111 7936+0 records in 00:19:13.111 7936+0 records out 00:19:13.111 32505856 bytes (33 MB, 31 MiB) copied, 0.974308 s, 33.4 MB/s 00:19:13.111 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:13.111 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:13.111 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:13.111 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:13.111 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:13.111 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.111 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:13.369 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:13.628 [2024-11-15 11:30:56.318253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.628 [2024-11-15 11:30:56.330406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.628 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.629 "name": "raid_bdev1", 00:19:13.629 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:13.629 "strip_size_kb": 0, 00:19:13.629 "state": "online", 00:19:13.629 "raid_level": "raid1", 00:19:13.629 "superblock": true, 00:19:13.629 "num_base_bdevs": 2, 00:19:13.629 "num_base_bdevs_discovered": 1, 00:19:13.629 "num_base_bdevs_operational": 1, 00:19:13.629 "base_bdevs_list": [ 00:19:13.629 { 00:19:13.629 "name": null, 00:19:13.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.629 "is_configured": false, 00:19:13.629 "data_offset": 0, 00:19:13.629 "data_size": 7936 00:19:13.629 }, 00:19:13.629 { 00:19:13.629 "name": "BaseBdev2", 00:19:13.629 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:13.629 "is_configured": true, 00:19:13.629 "data_offset": 256, 00:19:13.629 "data_size": 7936 00:19:13.629 } 00:19:13.629 ] 00:19:13.629 }' 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.629 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.198 11:30:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:14.198 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.198 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.198 [2024-11-15 11:30:56.870572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:14.198 [2024-11-15 11:30:56.889861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:14.198 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.198 11:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:14.198 [2024-11-15 11:30:56.892980] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:15.136 11:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.136 11:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.136 11:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.136 11:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.136 11:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.136 11:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.136 11:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.136 11:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.136 11:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.136 11:30:57 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.136 11:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.136 "name": "raid_bdev1", 00:19:15.136 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:15.136 "strip_size_kb": 0, 00:19:15.136 "state": "online", 00:19:15.136 "raid_level": "raid1", 00:19:15.136 "superblock": true, 00:19:15.136 "num_base_bdevs": 2, 00:19:15.136 "num_base_bdevs_discovered": 2, 00:19:15.136 "num_base_bdevs_operational": 2, 00:19:15.136 "process": { 00:19:15.136 "type": "rebuild", 00:19:15.136 "target": "spare", 00:19:15.136 "progress": { 00:19:15.136 "blocks": 2560, 00:19:15.136 "percent": 32 00:19:15.136 } 00:19:15.136 }, 00:19:15.136 "base_bdevs_list": [ 00:19:15.136 { 00:19:15.136 "name": "spare", 00:19:15.136 "uuid": "cc68c919-4e81-5cd5-8fb2-8f2085f78a3c", 00:19:15.136 "is_configured": true, 00:19:15.136 "data_offset": 256, 00:19:15.136 "data_size": 7936 00:19:15.136 }, 00:19:15.136 { 00:19:15.136 "name": "BaseBdev2", 00:19:15.136 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:15.136 "is_configured": true, 00:19:15.136 "data_offset": 256, 00:19:15.136 "data_size": 7936 00:19:15.136 } 00:19:15.136 ] 00:19:15.136 }' 00:19:15.136 11:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.136 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.136 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.136 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.136 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:15.136 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.136 11:30:58 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.136 [2024-11-15 11:30:58.067381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.396 [2024-11-15 11:30:58.105165] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:15.396 [2024-11-15 11:30:58.105305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.396 [2024-11-15 11:30:58.105329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.396 [2024-11-15 11:30:58.105345] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.396 11:30:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.396 "name": "raid_bdev1", 00:19:15.396 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:15.396 "strip_size_kb": 0, 00:19:15.396 "state": "online", 00:19:15.396 "raid_level": "raid1", 00:19:15.396 "superblock": true, 00:19:15.396 "num_base_bdevs": 2, 00:19:15.396 "num_base_bdevs_discovered": 1, 00:19:15.396 "num_base_bdevs_operational": 1, 00:19:15.396 "base_bdevs_list": [ 00:19:15.396 { 00:19:15.396 "name": null, 00:19:15.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.396 "is_configured": false, 00:19:15.396 "data_offset": 0, 00:19:15.396 "data_size": 7936 00:19:15.396 }, 00:19:15.396 { 00:19:15.396 "name": "BaseBdev2", 00:19:15.396 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:15.396 "is_configured": true, 00:19:15.396 "data_offset": 256, 00:19:15.396 "data_size": 7936 00:19:15.396 } 00:19:15.396 ] 00:19:15.396 }' 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.396 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.965 11:30:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.965 "name": "raid_bdev1", 00:19:15.965 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:15.965 "strip_size_kb": 0, 00:19:15.965 "state": "online", 00:19:15.965 "raid_level": "raid1", 00:19:15.965 "superblock": true, 00:19:15.965 "num_base_bdevs": 2, 00:19:15.965 "num_base_bdevs_discovered": 1, 00:19:15.965 "num_base_bdevs_operational": 1, 00:19:15.965 "base_bdevs_list": [ 00:19:15.965 { 00:19:15.965 "name": null, 00:19:15.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.965 "is_configured": false, 00:19:15.965 "data_offset": 0, 00:19:15.965 "data_size": 7936 00:19:15.965 }, 00:19:15.965 { 00:19:15.965 "name": "BaseBdev2", 00:19:15.965 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:15.965 "is_configured": true, 00:19:15.965 "data_offset": 256, 00:19:15.965 "data_size": 7936 00:19:15.965 } 00:19:15.965 ] 00:19:15.965 }' 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.965 11:30:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.965 [2024-11-15 11:30:58.861901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:15.965 [2024-11-15 11:30:58.880991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.965 11:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:15.965 [2024-11-15 11:30:58.884188] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.345 "name": "raid_bdev1", 00:19:17.345 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:17.345 "strip_size_kb": 0, 00:19:17.345 "state": "online", 00:19:17.345 "raid_level": "raid1", 00:19:17.345 "superblock": true, 00:19:17.345 "num_base_bdevs": 2, 00:19:17.345 "num_base_bdevs_discovered": 2, 00:19:17.345 "num_base_bdevs_operational": 2, 00:19:17.345 "process": { 00:19:17.345 "type": "rebuild", 00:19:17.345 "target": "spare", 00:19:17.345 "progress": { 00:19:17.345 "blocks": 2560, 00:19:17.345 "percent": 32 00:19:17.345 } 00:19:17.345 }, 00:19:17.345 "base_bdevs_list": [ 00:19:17.345 { 00:19:17.345 "name": "spare", 00:19:17.345 "uuid": "cc68c919-4e81-5cd5-8fb2-8f2085f78a3c", 00:19:17.345 "is_configured": true, 00:19:17.345 "data_offset": 256, 00:19:17.345 "data_size": 7936 00:19:17.345 }, 00:19:17.345 { 00:19:17.345 "name": "BaseBdev2", 00:19:17.345 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:17.345 "is_configured": true, 00:19:17.345 "data_offset": 256, 00:19:17.345 "data_size": 7936 00:19:17.345 } 00:19:17.345 ] 00:19:17.345 }' 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.345 11:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:17.345 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=737 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.345 11:31:00 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.345 "name": "raid_bdev1", 00:19:17.345 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:17.345 "strip_size_kb": 0, 00:19:17.345 "state": "online", 00:19:17.345 "raid_level": "raid1", 00:19:17.345 "superblock": true, 00:19:17.345 "num_base_bdevs": 2, 00:19:17.345 "num_base_bdevs_discovered": 2, 00:19:17.345 "num_base_bdevs_operational": 2, 00:19:17.345 "process": { 00:19:17.345 "type": "rebuild", 00:19:17.345 "target": "spare", 00:19:17.345 "progress": { 00:19:17.345 "blocks": 2816, 00:19:17.345 "percent": 35 00:19:17.345 } 00:19:17.345 }, 00:19:17.345 "base_bdevs_list": [ 00:19:17.345 { 00:19:17.345 "name": "spare", 00:19:17.345 "uuid": "cc68c919-4e81-5cd5-8fb2-8f2085f78a3c", 00:19:17.345 "is_configured": true, 00:19:17.345 "data_offset": 256, 00:19:17.345 "data_size": 7936 00:19:17.345 }, 00:19:17.345 { 00:19:17.345 "name": "BaseBdev2", 00:19:17.345 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:17.345 "is_configured": true, 00:19:17.345 "data_offset": 256, 00:19:17.345 "data_size": 7936 00:19:17.345 } 00:19:17.345 ] 00:19:17.345 }' 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.345 11:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:18.282 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:18.282 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.282 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.282 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:18.282 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:18.282 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.282 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.282 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.282 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.282 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.542 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.542 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.542 "name": "raid_bdev1", 00:19:18.542 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:18.542 "strip_size_kb": 0, 00:19:18.542 "state": "online", 00:19:18.542 "raid_level": "raid1", 00:19:18.542 "superblock": true, 00:19:18.542 "num_base_bdevs": 2, 00:19:18.542 "num_base_bdevs_discovered": 2, 00:19:18.542 "num_base_bdevs_operational": 2, 00:19:18.542 "process": { 00:19:18.542 "type": "rebuild", 00:19:18.542 "target": "spare", 00:19:18.542 "progress": { 00:19:18.542 "blocks": 5888, 00:19:18.542 "percent": 74 00:19:18.542 } 00:19:18.542 }, 00:19:18.542 "base_bdevs_list": [ 00:19:18.542 { 00:19:18.542 "name": "spare", 00:19:18.542 "uuid": "cc68c919-4e81-5cd5-8fb2-8f2085f78a3c", 00:19:18.542 "is_configured": true, 00:19:18.542 "data_offset": 256, 00:19:18.542 "data_size": 7936 00:19:18.542 
}, 00:19:18.542 { 00:19:18.542 "name": "BaseBdev2", 00:19:18.542 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:18.542 "is_configured": true, 00:19:18.542 "data_offset": 256, 00:19:18.542 "data_size": 7936 00:19:18.542 } 00:19:18.542 ] 00:19:18.542 }' 00:19:18.542 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.542 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:18.542 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.542 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.542 11:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:19.110 [2024-11-15 11:31:02.012562] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:19.110 [2024-11-15 11:31:02.012641] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:19.110 [2024-11-15 11:31:02.012785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.678 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:19.678 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.678 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.678 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.678 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.678 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.678 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:19.678 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.678 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.678 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.678 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.678 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.678 "name": "raid_bdev1", 00:19:19.678 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:19.678 "strip_size_kb": 0, 00:19:19.678 "state": "online", 00:19:19.678 "raid_level": "raid1", 00:19:19.678 "superblock": true, 00:19:19.678 "num_base_bdevs": 2, 00:19:19.678 "num_base_bdevs_discovered": 2, 00:19:19.678 "num_base_bdevs_operational": 2, 00:19:19.678 "base_bdevs_list": [ 00:19:19.678 { 00:19:19.679 "name": "spare", 00:19:19.679 "uuid": "cc68c919-4e81-5cd5-8fb2-8f2085f78a3c", 00:19:19.679 "is_configured": true, 00:19:19.679 "data_offset": 256, 00:19:19.679 "data_size": 7936 00:19:19.679 }, 00:19:19.679 { 00:19:19.679 "name": "BaseBdev2", 00:19:19.679 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:19.679 "is_configured": true, 00:19:19.679 "data_offset": 256, 00:19:19.679 "data_size": 7936 00:19:19.679 } 00:19:19.679 ] 00:19:19.679 }' 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
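The rebuild progress dumps above report block/percent pairs (2560 → 32%, 2816 → 35%, 5888 → 74%) against a `data_size` of 7936 blocks. All three pairs are consistent with an integer floor of `blocks * 100 / data_size`; the exact rounding inside `bdev_raid.c` is not shown in this log, so that formula is an assumption. A minimal sketch reproducing the reported values:

```shell
# Reproduce the rebuild "percent" values reported in the progress dumps
# above, assuming integer floor division (this matches all three samples
# in the log; the actual rounding in bdev_raid.c is not shown here).
data_size=7936
for blocks in 2560 2816 5888; do
  echo "$blocks -> $(( blocks * 100 / data_size ))%"
done
```

Each iteration prints the same percent value the RPC output reports for that block count.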
00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.679 "name": "raid_bdev1", 00:19:19.679 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:19.679 "strip_size_kb": 0, 00:19:19.679 "state": "online", 00:19:19.679 "raid_level": "raid1", 00:19:19.679 "superblock": true, 00:19:19.679 "num_base_bdevs": 2, 00:19:19.679 "num_base_bdevs_discovered": 2, 00:19:19.679 "num_base_bdevs_operational": 2, 00:19:19.679 "base_bdevs_list": [ 00:19:19.679 { 00:19:19.679 "name": "spare", 00:19:19.679 "uuid": "cc68c919-4e81-5cd5-8fb2-8f2085f78a3c", 00:19:19.679 "is_configured": true, 00:19:19.679 "data_offset": 256, 00:19:19.679 "data_size": 7936 00:19:19.679 }, 00:19:19.679 { 00:19:19.679 "name": "BaseBdev2", 00:19:19.679 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:19.679 "is_configured": true, 
00:19:19.679 "data_offset": 256, 00:19:19.679 "data_size": 7936 00:19:19.679 } 00:19:19.679 ] 00:19:19.679 }' 00:19:19.679 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.938 "name": "raid_bdev1", 00:19:19.938 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:19.938 "strip_size_kb": 0, 00:19:19.938 "state": "online", 00:19:19.938 "raid_level": "raid1", 00:19:19.938 "superblock": true, 00:19:19.938 "num_base_bdevs": 2, 00:19:19.938 "num_base_bdevs_discovered": 2, 00:19:19.938 "num_base_bdevs_operational": 2, 00:19:19.938 "base_bdevs_list": [ 00:19:19.938 { 00:19:19.938 "name": "spare", 00:19:19.938 "uuid": "cc68c919-4e81-5cd5-8fb2-8f2085f78a3c", 00:19:19.938 "is_configured": true, 00:19:19.938 "data_offset": 256, 00:19:19.938 "data_size": 7936 00:19:19.938 }, 00:19:19.938 { 00:19:19.938 "name": "BaseBdev2", 00:19:19.938 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:19.938 "is_configured": true, 00:19:19.938 "data_offset": 256, 00:19:19.938 "data_size": 7936 00:19:19.938 } 00:19:19.938 ] 00:19:19.938 }' 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.938 11:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.506 [2024-11-15 11:31:03.202433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.506 [2024-11-15 11:31:03.202728] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:19:20.506 [2024-11-15 11:31:03.202881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.506 [2024-11-15 11:31:03.202987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.506 [2024-11-15 11:31:03.203007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:20.506 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:20.765 /dev/nbd0 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:20.765 1+0 records in 00:19:20.765 1+0 records out 00:19:20.765 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000375478 s, 10.9 MB/s 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:20.765 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:21.023 /dev/nbd1 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:21.023 1+0 records in 00:19:21.023 1+0 records out 00:19:21.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273221 s, 15.0 MB/s 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:21.023 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:19:21.024 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:21.024 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:21.024 11:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:21.282 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:21.282 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:21.282 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:21.282 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:21.282 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
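The trace above shows the `waitfornbd` helper running for both `/dev/nbd0` and `/dev/nbd1`: it polls `/proc/partitions` until the kernel lists the device, then issues a single 4 KiB direct-I/O `dd` read to confirm the device is actually usable. A minimal sketch of that polling pattern follows; the function body is reconstructed from the traced commands rather than taken from the SPDK helper source, and the second parameter (an alternate partitions file) plus the 0.1 s sleep are assumptions added so the loop can be exercised without real nbd devices.

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd polling loop seen in the trace above.
# ASSUMPTIONS: the partitions-file parameter and the sleep interval are
# hypothetical additions; the real helper reads /proc/partitions directly.
waitfornbd() {
    local nbd_name=$1
    local partitions=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        # Ready once the kernel lists the device (whole-word match,
        # mirroring `grep -q -w nbd0 /proc/partitions` in the trace).
        grep -q -w "$nbd_name" "$partitions" && break
        sleep 0.1
    done
    (( i <= 20 )) || return 1
    # The trace then verifies readability with one 4 KiB direct-I/O read:
    #   dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct
    # and checks that the copied file size is non-zero before returning 0.
    return 0
}
```

In the log both devices pass on the first iteration, which is why each `dd` reports exactly one 4096-byte block copied.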
00:19:21.282 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:21.282 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:21.541 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:21.541 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:21.541 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:21.541 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:21.541 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:21.541 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:21.541 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:21.541 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:21.541 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:21.541 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.800 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.800 [2024-11-15 11:31:04.725065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:21.800 [2024-11-15 11:31:04.725140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.801 [2024-11-15 11:31:04.725219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:21.801 [2024-11-15 11:31:04.725238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.801 [2024-11-15 11:31:04.728441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.801 [2024-11-15 11:31:04.728486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:21.801 [2024-11-15 11:31:04.728669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:21.801 spare 
00:19:21.801 [2024-11-15 11:31:04.728739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.801 [2024-11-15 11:31:04.728961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:21.801 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.801 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:21.801 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.801 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.060 [2024-11-15 11:31:04.829121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:22.060 [2024-11-15 11:31:04.829215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:22.060 [2024-11-15 11:31:04.829698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:22.060 [2024-11-15 11:31:04.830016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:22.060 [2024-11-15 11:31:04.830037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:22.060 [2024-11-15 11:31:04.830336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.060 "name": "raid_bdev1", 00:19:22.060 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:22.060 "strip_size_kb": 0, 00:19:22.060 "state": "online", 00:19:22.060 "raid_level": "raid1", 00:19:22.060 "superblock": true, 00:19:22.060 "num_base_bdevs": 2, 00:19:22.060 "num_base_bdevs_discovered": 2, 00:19:22.060 "num_base_bdevs_operational": 2, 00:19:22.060 "base_bdevs_list": [ 00:19:22.060 { 00:19:22.060 "name": "spare", 00:19:22.060 "uuid": "cc68c919-4e81-5cd5-8fb2-8f2085f78a3c", 00:19:22.060 "is_configured": true, 00:19:22.060 "data_offset": 256, 00:19:22.060 "data_size": 7936 00:19:22.060 }, 00:19:22.060 { 
00:19:22.060 "name": "BaseBdev2", 00:19:22.060 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:22.060 "is_configured": true, 00:19:22.060 "data_offset": 256, 00:19:22.060 "data_size": 7936 00:19:22.060 } 00:19:22.060 ] 00:19:22.060 }' 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.060 11:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.628 "name": "raid_bdev1", 00:19:22.628 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:22.628 "strip_size_kb": 0, 00:19:22.628 "state": "online", 00:19:22.628 "raid_level": "raid1", 00:19:22.628 "superblock": true, 00:19:22.628 "num_base_bdevs": 2, 00:19:22.628 "num_base_bdevs_discovered": 2, 
00:19:22.628 "num_base_bdevs_operational": 2, 00:19:22.628 "base_bdevs_list": [ 00:19:22.628 { 00:19:22.628 "name": "spare", 00:19:22.628 "uuid": "cc68c919-4e81-5cd5-8fb2-8f2085f78a3c", 00:19:22.628 "is_configured": true, 00:19:22.628 "data_offset": 256, 00:19:22.628 "data_size": 7936 00:19:22.628 }, 00:19:22.628 { 00:19:22.628 "name": "BaseBdev2", 00:19:22.628 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:22.628 "is_configured": true, 00:19:22.628 "data_offset": 256, 00:19:22.628 "data_size": 7936 00:19:22.628 } 00:19:22.628 ] 00:19:22.628 }' 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.628 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:22.629 [2024-11-15 11:31:05.554474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.629 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.888 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.888 11:31:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.888 "name": "raid_bdev1", 00:19:22.888 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:22.888 "strip_size_kb": 0, 00:19:22.888 "state": "online", 00:19:22.888 "raid_level": "raid1", 00:19:22.888 "superblock": true, 00:19:22.888 "num_base_bdevs": 2, 00:19:22.888 "num_base_bdevs_discovered": 1, 00:19:22.888 "num_base_bdevs_operational": 1, 00:19:22.888 "base_bdevs_list": [ 00:19:22.888 { 00:19:22.888 "name": null, 00:19:22.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.888 "is_configured": false, 00:19:22.888 "data_offset": 0, 00:19:22.888 "data_size": 7936 00:19:22.888 }, 00:19:22.888 { 00:19:22.888 "name": "BaseBdev2", 00:19:22.888 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:22.888 "is_configured": true, 00:19:22.888 "data_offset": 256, 00:19:22.888 "data_size": 7936 00:19:22.888 } 00:19:22.888 ] 00:19:22.888 }' 00:19:22.888 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.888 11:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.146 11:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:23.146 11:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.146 11:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.146 [2024-11-15 11:31:06.014703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.146 [2024-11-15 11:31:06.015089] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:23.146 [2024-11-15 11:31:06.015121] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
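The `verify_raid_bdev_state` calls traced above filter the `bdev_raid_get_bdevs all` RPC response with jq and compare the selected fields against the expected state. The sketch below replays that extraction; the inline JSON is a trimmed stand-in for the response shown in the log, not the full RPC output.

```shell
# Trimmed stand-in for the `bdev_raid_get_bdevs all` response in the log.
json='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
       "num_base_bdevs_discovered":1,"num_base_bdevs_operational":1}]'

# Select the entry for the raid bdev under test, as the script does.
info=$(echo "$json" | jq -r '.[] | select(.name == "raid_bdev1")')

state=$(echo "$info" | jq -r '.state')
level=$(echo "$info" | jq -r '.raid_level')
# `// "none"` supplies a default when no rebuild process is active,
# matching the script's `.process.type // "none"` check.
process=$(echo "$info" | jq -r '.process.type // "none"')
echo "$state $level $process"   # prints: online raid1 none
```

During an active rebuild (as later in this log) `.process.type` yields `rebuild` and `.process.target` yields `spare` instead of the defaults.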
00:19:23.146 [2024-11-15 11:31:06.015197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.146 [2024-11-15 11:31:06.032004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:23.146 11:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.146 11:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:23.146 [2024-11-15 11:31:06.034851] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.524 "name": "raid_bdev1", 00:19:24.524 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:24.524 "strip_size_kb": 0, 00:19:24.524 "state": "online", 
00:19:24.524 "raid_level": "raid1", 00:19:24.524 "superblock": true, 00:19:24.524 "num_base_bdevs": 2, 00:19:24.524 "num_base_bdevs_discovered": 2, 00:19:24.524 "num_base_bdevs_operational": 2, 00:19:24.524 "process": { 00:19:24.524 "type": "rebuild", 00:19:24.524 "target": "spare", 00:19:24.524 "progress": { 00:19:24.524 "blocks": 2560, 00:19:24.524 "percent": 32 00:19:24.524 } 00:19:24.524 }, 00:19:24.524 "base_bdevs_list": [ 00:19:24.524 { 00:19:24.524 "name": "spare", 00:19:24.524 "uuid": "cc68c919-4e81-5cd5-8fb2-8f2085f78a3c", 00:19:24.524 "is_configured": true, 00:19:24.524 "data_offset": 256, 00:19:24.524 "data_size": 7936 00:19:24.524 }, 00:19:24.524 { 00:19:24.524 "name": "BaseBdev2", 00:19:24.524 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:24.524 "is_configured": true, 00:19:24.524 "data_offset": 256, 00:19:24.524 "data_size": 7936 00:19:24.524 } 00:19:24.524 ] 00:19:24.524 }' 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.524 [2024-11-15 11:31:07.188079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.524 [2024-11-15 11:31:07.245722] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:24.524 [2024-11-15 
11:31:07.245873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.524 [2024-11-15 11:31:07.245897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.524 [2024-11-15 11:31:07.245911] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.524 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.525 "name": "raid_bdev1", 00:19:24.525 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:24.525 "strip_size_kb": 0, 00:19:24.525 "state": "online", 00:19:24.525 "raid_level": "raid1", 00:19:24.525 "superblock": true, 00:19:24.525 "num_base_bdevs": 2, 00:19:24.525 "num_base_bdevs_discovered": 1, 00:19:24.525 "num_base_bdevs_operational": 1, 00:19:24.525 "base_bdevs_list": [ 00:19:24.525 { 00:19:24.525 "name": null, 00:19:24.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.525 "is_configured": false, 00:19:24.525 "data_offset": 0, 00:19:24.525 "data_size": 7936 00:19:24.525 }, 00:19:24.525 { 00:19:24.525 "name": "BaseBdev2", 00:19:24.525 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:24.525 "is_configured": true, 00:19:24.525 "data_offset": 256, 00:19:24.525 "data_size": 7936 00:19:24.525 } 00:19:24.525 ] 00:19:24.525 }' 00:19:24.525 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.525 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.093 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:25.093 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.093 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.093 [2024-11-15 11:31:07.815532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:25.093 [2024-11-15 11:31:07.815638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.093 [2024-11-15 11:31:07.815674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:19:25.093 [2024-11-15 11:31:07.815691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.093 [2024-11-15 11:31:07.816426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.093 [2024-11-15 11:31:07.816476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:25.093 [2024-11-15 11:31:07.816620] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:25.093 [2024-11-15 11:31:07.816646] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:25.093 [2024-11-15 11:31:07.816662] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:25.093 [2024-11-15 11:31:07.816698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:25.093 [2024-11-15 11:31:07.832735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:25.093 spare 00:19:25.093 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.093 11:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:25.093 [2024-11-15 11:31:07.835592] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.029 "name": "raid_bdev1", 00:19:26.029 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:26.029 "strip_size_kb": 0, 00:19:26.029 "state": "online", 00:19:26.029 "raid_level": "raid1", 00:19:26.029 "superblock": true, 00:19:26.029 "num_base_bdevs": 2, 00:19:26.029 "num_base_bdevs_discovered": 2, 00:19:26.029 "num_base_bdevs_operational": 2, 00:19:26.029 "process": { 00:19:26.029 "type": "rebuild", 00:19:26.029 "target": "spare", 00:19:26.029 "progress": { 00:19:26.029 "blocks": 2560, 00:19:26.029 "percent": 32 00:19:26.029 } 00:19:26.029 }, 00:19:26.029 "base_bdevs_list": [ 00:19:26.029 { 00:19:26.029 "name": "spare", 00:19:26.029 "uuid": "cc68c919-4e81-5cd5-8fb2-8f2085f78a3c", 00:19:26.029 "is_configured": true, 00:19:26.029 "data_offset": 256, 00:19:26.029 "data_size": 7936 00:19:26.029 }, 00:19:26.029 { 00:19:26.029 "name": "BaseBdev2", 00:19:26.029 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:26.029 "is_configured": true, 00:19:26.029 "data_offset": 256, 00:19:26.029 "data_size": 7936 00:19:26.029 } 00:19:26.029 ] 00:19:26.029 }' 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:26.029 11:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.288 [2024-11-15 11:31:09.026004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:26.288 [2024-11-15 11:31:09.046810] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:26.288 [2024-11-15 11:31:09.047066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.288 [2024-11-15 11:31:09.047214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:26.288 [2024-11-15 11:31:09.047269] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.288 "name": "raid_bdev1", 00:19:26.288 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:26.288 "strip_size_kb": 0, 00:19:26.288 "state": "online", 00:19:26.288 "raid_level": "raid1", 00:19:26.288 "superblock": true, 00:19:26.288 "num_base_bdevs": 2, 00:19:26.288 "num_base_bdevs_discovered": 1, 00:19:26.288 "num_base_bdevs_operational": 1, 00:19:26.288 "base_bdevs_list": [ 00:19:26.288 { 00:19:26.288 "name": null, 00:19:26.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.288 "is_configured": false, 00:19:26.288 "data_offset": 0, 00:19:26.288 "data_size": 7936 00:19:26.288 }, 00:19:26.288 { 00:19:26.288 "name": "BaseBdev2", 00:19:26.288 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:26.288 "is_configured": true, 00:19:26.288 "data_offset": 256, 00:19:26.288 "data_size": 7936 00:19:26.288 } 00:19:26.288 ] 00:19:26.288 }' 
00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.288 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.856 "name": "raid_bdev1", 00:19:26.856 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:26.856 "strip_size_kb": 0, 00:19:26.856 "state": "online", 00:19:26.856 "raid_level": "raid1", 00:19:26.856 "superblock": true, 00:19:26.856 "num_base_bdevs": 2, 00:19:26.856 "num_base_bdevs_discovered": 1, 00:19:26.856 "num_base_bdevs_operational": 1, 00:19:26.856 "base_bdevs_list": [ 00:19:26.856 { 00:19:26.856 "name": null, 00:19:26.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.856 "is_configured": false, 00:19:26.856 "data_offset": 0, 
00:19:26.856 "data_size": 7936 00:19:26.856 }, 00:19:26.856 { 00:19:26.856 "name": "BaseBdev2", 00:19:26.856 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:26.856 "is_configured": true, 00:19:26.856 "data_offset": 256, 00:19:26.856 "data_size": 7936 00:19:26.856 } 00:19:26.856 ] 00:19:26.856 }' 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.856 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.857 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:26.857 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.857 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.857 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.857 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:26.857 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.857 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.857 [2024-11-15 11:31:09.755788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:26.857 [2024-11-15 11:31:09.756033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.857 [2024-11-15 11:31:09.756122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:26.857 [2024-11-15 11:31:09.756156] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.857 [2024-11-15 11:31:09.756824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.857 [2024-11-15 11:31:09.756850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:26.857 [2024-11-15 11:31:09.756954] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:26.857 [2024-11-15 11:31:09.756975] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:26.857 [2024-11-15 11:31:09.756991] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:26.857 [2024-11-15 11:31:09.757005] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:26.857 BaseBdev1 00:19:26.857 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.857 11:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.234 "name": "raid_bdev1", 00:19:28.234 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:28.234 "strip_size_kb": 0, 00:19:28.234 "state": "online", 00:19:28.234 "raid_level": "raid1", 00:19:28.234 "superblock": true, 00:19:28.234 "num_base_bdevs": 2, 00:19:28.234 "num_base_bdevs_discovered": 1, 00:19:28.234 "num_base_bdevs_operational": 1, 00:19:28.234 "base_bdevs_list": [ 00:19:28.234 { 00:19:28.234 "name": null, 00:19:28.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.234 "is_configured": false, 00:19:28.234 "data_offset": 0, 00:19:28.234 "data_size": 7936 00:19:28.234 }, 00:19:28.234 { 00:19:28.234 "name": "BaseBdev2", 00:19:28.234 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:28.234 "is_configured": true, 00:19:28.234 "data_offset": 256, 00:19:28.234 "data_size": 7936 00:19:28.234 } 00:19:28.234 ] 00:19:28.234 }' 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.234 11:31:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:19:28.492 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:28.492 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.492 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:28.492 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:28.492 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.493 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.493 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.493 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.493 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.493 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.493 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.493 "name": "raid_bdev1", 00:19:28.493 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:28.493 "strip_size_kb": 0, 00:19:28.493 "state": "online", 00:19:28.493 "raid_level": "raid1", 00:19:28.493 "superblock": true, 00:19:28.493 "num_base_bdevs": 2, 00:19:28.493 "num_base_bdevs_discovered": 1, 00:19:28.493 "num_base_bdevs_operational": 1, 00:19:28.493 "base_bdevs_list": [ 00:19:28.493 { 00:19:28.493 "name": null, 00:19:28.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.493 "is_configured": false, 00:19:28.493 "data_offset": 0, 00:19:28.493 "data_size": 7936 00:19:28.493 }, 00:19:28.493 { 00:19:28.493 "name": "BaseBdev2", 00:19:28.493 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:28.493 "is_configured": true, 
00:19:28.493 "data_offset": 256, 00:19:28.493 "data_size": 7936 00:19:28.493 } 00:19:28.493 ] 00:19:28.493 }' 00:19:28.493 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.493 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:28.493 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.756 [2024-11-15 11:31:11.492458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.756 [2024-11-15 11:31:11.492822] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:28.756 [2024-11-15 11:31:11.492843] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:28.756 request: 00:19:28.756 { 00:19:28.756 "base_bdev": "BaseBdev1", 00:19:28.756 "raid_bdev": "raid_bdev1", 00:19:28.756 "method": "bdev_raid_add_base_bdev", 00:19:28.756 "req_id": 1 00:19:28.756 } 00:19:28.756 Got JSON-RPC error response 00:19:28.756 response: 00:19:28.756 { 00:19:28.756 "code": -22, 00:19:28.756 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:28.756 } 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.756 11:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.699 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.699 "name": "raid_bdev1", 00:19:29.699 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:29.699 "strip_size_kb": 0, 00:19:29.699 "state": "online", 00:19:29.699 "raid_level": "raid1", 00:19:29.699 "superblock": true, 00:19:29.699 "num_base_bdevs": 2, 00:19:29.699 "num_base_bdevs_discovered": 1, 00:19:29.699 "num_base_bdevs_operational": 1, 00:19:29.699 "base_bdevs_list": [ 00:19:29.699 { 00:19:29.699 "name": null, 00:19:29.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.700 "is_configured": false, 00:19:29.700 "data_offset": 0, 00:19:29.700 "data_size": 7936 00:19:29.700 }, 00:19:29.700 { 00:19:29.700 "name": "BaseBdev2", 00:19:29.700 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:29.700 "is_configured": true, 00:19:29.700 "data_offset": 256, 00:19:29.700 "data_size": 7936 00:19:29.700 } 00:19:29.700 ] 00:19:29.700 }' 
00:19:29.700 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.700 11:31:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.268 "name": "raid_bdev1", 00:19:30.268 "uuid": "1ffe3e0d-7d28-498d-9114-79b4bfdf1ed2", 00:19:30.268 "strip_size_kb": 0, 00:19:30.268 "state": "online", 00:19:30.268 "raid_level": "raid1", 00:19:30.268 "superblock": true, 00:19:30.268 "num_base_bdevs": 2, 00:19:30.268 "num_base_bdevs_discovered": 1, 00:19:30.268 "num_base_bdevs_operational": 1, 00:19:30.268 "base_bdevs_list": [ 00:19:30.268 { 00:19:30.268 "name": null, 00:19:30.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.268 "is_configured": false, 00:19:30.268 "data_offset": 0, 
00:19:30.268 "data_size": 7936 00:19:30.268 }, 00:19:30.268 { 00:19:30.268 "name": "BaseBdev2", 00:19:30.268 "uuid": "38a54868-4065-52fb-9579-87d6ff3b38db", 00:19:30.268 "is_configured": true, 00:19:30.268 "data_offset": 256, 00:19:30.268 "data_size": 7936 00:19:30.268 } 00:19:30.268 ] 00:19:30.268 }' 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86805 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86805 ']' 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86805 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86805 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:30.268 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:30.268 killing process with pid 86805 00:19:30.268 Received shutdown signal, test time was about 60.000000 seconds 00:19:30.268 00:19:30.268 Latency(us) 00:19:30.268 [2024-11-15T11:31:13.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.268 [2024-11-15T11:31:13.218Z] 
=================================================================================================================== 00:19:30.268 [2024-11-15T11:31:13.218Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:30.269 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86805' 00:19:30.269 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86805 00:19:30.269 [2024-11-15 11:31:13.200271] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:30.269 11:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86805 00:19:30.269 [2024-11-15 11:31:13.200471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.269 [2024-11-15 11:31:13.200591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:30.269 [2024-11-15 11:31:13.200610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:30.528 [2024-11-15 11:31:13.449058] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:31.905 11:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:31.905 00:19:31.905 real 0m21.809s 00:19:31.905 user 0m29.424s 00:19:31.905 sys 0m2.724s 00:19:31.905 11:31:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:31.905 11:31:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.905 ************************************ 00:19:31.905 END TEST raid_rebuild_test_sb_4k 00:19:31.905 ************************************ 00:19:31.905 11:31:14 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:31.905 11:31:14 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:31.905 11:31:14 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:31.905 11:31:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:31.905 11:31:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.905 ************************************ 00:19:31.905 START TEST raid_state_function_test_sb_md_separate 00:19:31.905 ************************************ 00:19:31.905 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:19:31.905 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:31.905 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:31.906 Process raid pid: 87508 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87508 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87508' 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87508 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87508 ']' 00:19:31.906 11:31:14 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:31.906 11:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.906 [2024-11-15 11:31:14.640946] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:19:31.906 [2024-11-15 11:31:14.641414] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.906 [2024-11-15 11:31:14.830917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.164 [2024-11-15 11:31:14.970588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.422 [2024-11-15 11:31:15.177300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:32.422 [2024-11-15 11:31:15.177580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.682 [2024-11-15 11:31:15.566822] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:32.682 [2024-11-15 11:31:15.567036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:32.682 [2024-11-15 11:31:15.567063] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:32.682 [2024-11-15 11:31:15.567081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.682 "name": "Existed_Raid", 00:19:32.682 "uuid": "e33aaf96-edf8-4a92-b57d-7c137dded02d", 00:19:32.682 "strip_size_kb": 0, 00:19:32.682 "state": "configuring", 00:19:32.682 "raid_level": "raid1", 00:19:32.682 "superblock": true, 00:19:32.682 "num_base_bdevs": 2, 00:19:32.682 "num_base_bdevs_discovered": 0, 00:19:32.682 "num_base_bdevs_operational": 2, 00:19:32.682 "base_bdevs_list": [ 00:19:32.682 { 00:19:32.682 "name": "BaseBdev1", 00:19:32.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.682 "is_configured": false, 00:19:32.682 "data_offset": 0, 00:19:32.682 "data_size": 0 00:19:32.682 }, 00:19:32.682 { 00:19:32.682 "name": "BaseBdev2", 00:19:32.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.682 "is_configured": false, 00:19:32.682 "data_offset": 0, 00:19:32.682 "data_size": 0 00:19:32.682 } 00:19:32.682 ] 00:19:32.682 }' 00:19:32.682 11:31:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.682 11:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.250 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:33.250 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.250 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.250 [2024-11-15 11:31:16.090979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:33.250 [2024-11-15 11:31:16.091170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:33.250 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.250 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:33.250 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.250 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.250 [2024-11-15 11:31:16.098982] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:33.250 [2024-11-15 11:31:16.099216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:33.250 [2024-11-15 11:31:16.099242] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:33.250 [2024-11-15 11:31:16.099264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:33.250 11:31:16 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.250 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:33.250 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.250 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.250 [2024-11-15 11:31:16.151107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:33.251 BaseBdev1 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.251 [ 00:19:33.251 { 00:19:33.251 "name": "BaseBdev1", 00:19:33.251 "aliases": [ 00:19:33.251 "72c362d4-f4cd-4f60-a6c3-74bd7daaf505" 00:19:33.251 ], 00:19:33.251 "product_name": "Malloc disk", 00:19:33.251 "block_size": 4096, 00:19:33.251 "num_blocks": 8192, 00:19:33.251 "uuid": "72c362d4-f4cd-4f60-a6c3-74bd7daaf505", 00:19:33.251 "md_size": 32, 00:19:33.251 "md_interleave": false, 00:19:33.251 "dif_type": 0, 00:19:33.251 "assigned_rate_limits": { 00:19:33.251 "rw_ios_per_sec": 0, 00:19:33.251 "rw_mbytes_per_sec": 0, 00:19:33.251 "r_mbytes_per_sec": 0, 00:19:33.251 "w_mbytes_per_sec": 0 00:19:33.251 }, 00:19:33.251 "claimed": true, 00:19:33.251 "claim_type": "exclusive_write", 00:19:33.251 "zoned": false, 00:19:33.251 "supported_io_types": { 00:19:33.251 "read": true, 00:19:33.251 "write": true, 00:19:33.251 "unmap": true, 00:19:33.251 "flush": true, 00:19:33.251 "reset": true, 00:19:33.251 "nvme_admin": false, 00:19:33.251 "nvme_io": false, 00:19:33.251 "nvme_io_md": false, 00:19:33.251 "write_zeroes": true, 00:19:33.251 "zcopy": true, 00:19:33.251 "get_zone_info": false, 00:19:33.251 "zone_management": false, 00:19:33.251 "zone_append": false, 00:19:33.251 "compare": false, 00:19:33.251 "compare_and_write": false, 00:19:33.251 "abort": true, 00:19:33.251 "seek_hole": false, 00:19:33.251 "seek_data": false, 00:19:33.251 "copy": true, 00:19:33.251 "nvme_iov_md": false 00:19:33.251 }, 00:19:33.251 "memory_domains": [ 00:19:33.251 { 00:19:33.251 "dma_device_id": "system", 00:19:33.251 "dma_device_type": 1 00:19:33.251 }, 
00:19:33.251 { 00:19:33.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.251 "dma_device_type": 2 00:19:33.251 } 00:19:33.251 ], 00:19:33.251 "driver_specific": {} 00:19:33.251 } 00:19:33.251 ] 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.251 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.511 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.511 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.511 "name": "Existed_Raid", 00:19:33.511 "uuid": "0bdac442-bb9d-4aa9-b64f-6262f0098d57", 00:19:33.511 "strip_size_kb": 0, 00:19:33.511 "state": "configuring", 00:19:33.511 "raid_level": "raid1", 00:19:33.511 "superblock": true, 00:19:33.511 "num_base_bdevs": 2, 00:19:33.511 "num_base_bdevs_discovered": 1, 00:19:33.511 "num_base_bdevs_operational": 2, 00:19:33.511 "base_bdevs_list": [ 00:19:33.511 { 00:19:33.511 "name": "BaseBdev1", 00:19:33.511 "uuid": "72c362d4-f4cd-4f60-a6c3-74bd7daaf505", 00:19:33.511 "is_configured": true, 00:19:33.511 "data_offset": 256, 00:19:33.511 "data_size": 7936 00:19:33.511 }, 00:19:33.511 { 00:19:33.511 "name": "BaseBdev2", 00:19:33.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.511 "is_configured": false, 00:19:33.511 "data_offset": 0, 00:19:33.511 "data_size": 0 00:19:33.511 } 00:19:33.511 ] 00:19:33.511 }' 00:19:33.511 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.511 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:19:33.771 [2024-11-15 11:31:16.699393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:33.771 [2024-11-15 11:31:16.699461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.771 [2024-11-15 11:31:16.711462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:33.771 [2024-11-15 11:31:16.714284] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:33.771 [2024-11-15 11:31:16.714341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.771 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.030 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.030 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.030 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.030 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.030 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.030 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.030 "name": "Existed_Raid", 00:19:34.030 "uuid": "3f5dae08-20a1-41e2-b8ba-e8c9189e3465", 00:19:34.030 "strip_size_kb": 0, 00:19:34.030 "state": "configuring", 00:19:34.030 "raid_level": "raid1", 00:19:34.030 "superblock": true, 00:19:34.030 "num_base_bdevs": 2, 00:19:34.030 "num_base_bdevs_discovered": 1, 00:19:34.030 
"num_base_bdevs_operational": 2, 00:19:34.030 "base_bdevs_list": [ 00:19:34.030 { 00:19:34.030 "name": "BaseBdev1", 00:19:34.030 "uuid": "72c362d4-f4cd-4f60-a6c3-74bd7daaf505", 00:19:34.030 "is_configured": true, 00:19:34.030 "data_offset": 256, 00:19:34.030 "data_size": 7936 00:19:34.030 }, 00:19:34.030 { 00:19:34.030 "name": "BaseBdev2", 00:19:34.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.030 "is_configured": false, 00:19:34.030 "data_offset": 0, 00:19:34.030 "data_size": 0 00:19:34.030 } 00:19:34.030 ] 00:19:34.030 }' 00:19:34.030 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.030 11:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.289 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:34.289 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.289 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.549 [2024-11-15 11:31:17.241781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:34.549 [2024-11-15 11:31:17.242308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:34.549 [2024-11-15 11:31:17.242339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:34.549 [2024-11-15 11:31:17.242446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:34.549 BaseBdev2 00:19:34.549 [2024-11-15 11:31:17.242718] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:34.549 [2024-11-15 11:31:17.242738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:19:34.549 [2024-11-15 11:31:17.242865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.549 [ 00:19:34.549 { 00:19:34.549 "name": "BaseBdev2", 00:19:34.549 "aliases": [ 00:19:34.549 
"07917f44-e308-4710-a6c7-cf19d3b4f814" 00:19:34.549 ], 00:19:34.549 "product_name": "Malloc disk", 00:19:34.549 "block_size": 4096, 00:19:34.549 "num_blocks": 8192, 00:19:34.549 "uuid": "07917f44-e308-4710-a6c7-cf19d3b4f814", 00:19:34.549 "md_size": 32, 00:19:34.549 "md_interleave": false, 00:19:34.549 "dif_type": 0, 00:19:34.549 "assigned_rate_limits": { 00:19:34.549 "rw_ios_per_sec": 0, 00:19:34.549 "rw_mbytes_per_sec": 0, 00:19:34.549 "r_mbytes_per_sec": 0, 00:19:34.549 "w_mbytes_per_sec": 0 00:19:34.549 }, 00:19:34.549 "claimed": true, 00:19:34.549 "claim_type": "exclusive_write", 00:19:34.549 "zoned": false, 00:19:34.549 "supported_io_types": { 00:19:34.549 "read": true, 00:19:34.549 "write": true, 00:19:34.549 "unmap": true, 00:19:34.549 "flush": true, 00:19:34.549 "reset": true, 00:19:34.549 "nvme_admin": false, 00:19:34.549 "nvme_io": false, 00:19:34.549 "nvme_io_md": false, 00:19:34.549 "write_zeroes": true, 00:19:34.549 "zcopy": true, 00:19:34.549 "get_zone_info": false, 00:19:34.549 "zone_management": false, 00:19:34.549 "zone_append": false, 00:19:34.549 "compare": false, 00:19:34.549 "compare_and_write": false, 00:19:34.549 "abort": true, 00:19:34.549 "seek_hole": false, 00:19:34.549 "seek_data": false, 00:19:34.549 "copy": true, 00:19:34.549 "nvme_iov_md": false 00:19:34.549 }, 00:19:34.549 "memory_domains": [ 00:19:34.549 { 00:19:34.549 "dma_device_id": "system", 00:19:34.549 "dma_device_type": 1 00:19:34.549 }, 00:19:34.549 { 00:19:34.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.549 "dma_device_type": 2 00:19:34.549 } 00:19:34.549 ], 00:19:34.549 "driver_specific": {} 00:19:34.549 } 00:19:34.549 ] 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.549 11:31:17 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.549 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.550 "name": "Existed_Raid", 00:19:34.550 "uuid": "3f5dae08-20a1-41e2-b8ba-e8c9189e3465", 00:19:34.550 "strip_size_kb": 0, 00:19:34.550 "state": "online", 00:19:34.550 "raid_level": "raid1", 00:19:34.550 "superblock": true, 00:19:34.550 "num_base_bdevs": 2, 00:19:34.550 "num_base_bdevs_discovered": 2, 00:19:34.550 "num_base_bdevs_operational": 2, 00:19:34.550 "base_bdevs_list": [ 00:19:34.550 { 00:19:34.550 "name": "BaseBdev1", 00:19:34.550 "uuid": "72c362d4-f4cd-4f60-a6c3-74bd7daaf505", 00:19:34.550 "is_configured": true, 00:19:34.550 "data_offset": 256, 00:19:34.550 "data_size": 7936 00:19:34.550 }, 00:19:34.550 { 00:19:34.550 "name": "BaseBdev2", 00:19:34.550 "uuid": "07917f44-e308-4710-a6c7-cf19d3b4f814", 00:19:34.550 "is_configured": true, 00:19:34.550 "data_offset": 256, 00:19:34.550 "data_size": 7936 00:19:34.550 } 00:19:34.550 ] 00:19:34.550 }' 00:19:34.550 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.550 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:35.119 11:31:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.119 [2024-11-15 11:31:17.806438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:35.119 "name": "Existed_Raid", 00:19:35.119 "aliases": [ 00:19:35.119 "3f5dae08-20a1-41e2-b8ba-e8c9189e3465" 00:19:35.119 ], 00:19:35.119 "product_name": "Raid Volume", 00:19:35.119 "block_size": 4096, 00:19:35.119 "num_blocks": 7936, 00:19:35.119 "uuid": "3f5dae08-20a1-41e2-b8ba-e8c9189e3465", 00:19:35.119 "md_size": 32, 00:19:35.119 "md_interleave": false, 00:19:35.119 "dif_type": 0, 00:19:35.119 "assigned_rate_limits": { 00:19:35.119 "rw_ios_per_sec": 0, 00:19:35.119 "rw_mbytes_per_sec": 0, 00:19:35.119 "r_mbytes_per_sec": 0, 00:19:35.119 "w_mbytes_per_sec": 0 00:19:35.119 }, 00:19:35.119 "claimed": false, 00:19:35.119 "zoned": false, 00:19:35.119 "supported_io_types": { 00:19:35.119 "read": true, 00:19:35.119 "write": true, 00:19:35.119 "unmap": false, 00:19:35.119 "flush": false, 00:19:35.119 "reset": true, 00:19:35.119 "nvme_admin": false, 00:19:35.119 "nvme_io": false, 00:19:35.119 "nvme_io_md": false, 00:19:35.119 "write_zeroes": true, 00:19:35.119 "zcopy": false, 00:19:35.119 "get_zone_info": 
false, 00:19:35.119 "zone_management": false, 00:19:35.119 "zone_append": false, 00:19:35.119 "compare": false, 00:19:35.119 "compare_and_write": false, 00:19:35.119 "abort": false, 00:19:35.119 "seek_hole": false, 00:19:35.119 "seek_data": false, 00:19:35.119 "copy": false, 00:19:35.119 "nvme_iov_md": false 00:19:35.119 }, 00:19:35.119 "memory_domains": [ 00:19:35.119 { 00:19:35.119 "dma_device_id": "system", 00:19:35.119 "dma_device_type": 1 00:19:35.119 }, 00:19:35.119 { 00:19:35.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.119 "dma_device_type": 2 00:19:35.119 }, 00:19:35.119 { 00:19:35.119 "dma_device_id": "system", 00:19:35.119 "dma_device_type": 1 00:19:35.119 }, 00:19:35.119 { 00:19:35.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.119 "dma_device_type": 2 00:19:35.119 } 00:19:35.119 ], 00:19:35.119 "driver_specific": { 00:19:35.119 "raid": { 00:19:35.119 "uuid": "3f5dae08-20a1-41e2-b8ba-e8c9189e3465", 00:19:35.119 "strip_size_kb": 0, 00:19:35.119 "state": "online", 00:19:35.119 "raid_level": "raid1", 00:19:35.119 "superblock": true, 00:19:35.119 "num_base_bdevs": 2, 00:19:35.119 "num_base_bdevs_discovered": 2, 00:19:35.119 "num_base_bdevs_operational": 2, 00:19:35.119 "base_bdevs_list": [ 00:19:35.119 { 00:19:35.119 "name": "BaseBdev1", 00:19:35.119 "uuid": "72c362d4-f4cd-4f60-a6c3-74bd7daaf505", 00:19:35.119 "is_configured": true, 00:19:35.119 "data_offset": 256, 00:19:35.119 "data_size": 7936 00:19:35.119 }, 00:19:35.119 { 00:19:35.119 "name": "BaseBdev2", 00:19:35.119 "uuid": "07917f44-e308-4710-a6c7-cf19d3b4f814", 00:19:35.119 "is_configured": true, 00:19:35.119 "data_offset": 256, 00:19:35.119 "data_size": 7936 00:19:35.119 } 00:19:35.119 ] 00:19:35.119 } 00:19:35.119 } 00:19:35.119 }' 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:35.119 11:31:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:35.119 BaseBdev2' 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:35.119 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.120 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:35.120 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.120 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.120 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.120 11:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.120 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:35.120 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:35.120 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.120 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.120 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:19:35.120 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.120 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.120 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.379 [2024-11-15 11:31:18.074074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.379 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.380 "name": "Existed_Raid", 00:19:35.380 "uuid": 
"3f5dae08-20a1-41e2-b8ba-e8c9189e3465", 00:19:35.380 "strip_size_kb": 0, 00:19:35.380 "state": "online", 00:19:35.380 "raid_level": "raid1", 00:19:35.380 "superblock": true, 00:19:35.380 "num_base_bdevs": 2, 00:19:35.380 "num_base_bdevs_discovered": 1, 00:19:35.380 "num_base_bdevs_operational": 1, 00:19:35.380 "base_bdevs_list": [ 00:19:35.380 { 00:19:35.380 "name": null, 00:19:35.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.380 "is_configured": false, 00:19:35.380 "data_offset": 0, 00:19:35.380 "data_size": 7936 00:19:35.380 }, 00:19:35.380 { 00:19:35.380 "name": "BaseBdev2", 00:19:35.380 "uuid": "07917f44-e308-4710-a6c7-cf19d3b4f814", 00:19:35.380 "is_configured": true, 00:19:35.380 "data_offset": 256, 00:19:35.380 "data_size": 7936 00:19:35.380 } 00:19:35.380 ] 00:19:35.380 }' 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.380 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.947 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:35.947 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:35.947 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:35.947 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.947 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.947 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.947 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.947 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:19:35.947 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:35.947 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:35.947 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.947 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.947 [2024-11-15 11:31:18.751673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:35.947 [2024-11-15 11:31:18.751838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:35.947 [2024-11-15 11:31:18.836515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:35.948 [2024-11-15 11:31:18.836585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:35.948 [2024-11-15 11:31:18.836606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.948 11:31:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87508 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87508 ']' 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87508 00:19:35.948 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:19:36.206 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:36.206 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87508 00:19:36.206 killing process with pid 87508 00:19:36.206 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:36.206 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:36.206 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87508' 00:19:36.206 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87508 00:19:36.206 [2024-11-15 11:31:18.923468] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:36.206 11:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87508 00:19:36.206 [2024-11-15 11:31:18.938855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:37.193 11:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:37.193 00:19:37.193 real 0m5.457s 00:19:37.193 user 0m8.073s 00:19:37.193 sys 0m0.893s 00:19:37.193 11:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:37.193 11:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.193 ************************************ 00:19:37.193 END TEST raid_state_function_test_sb_md_separate 00:19:37.193 ************************************ 00:19:37.193 11:31:20 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:37.193 11:31:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:37.193 11:31:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:37.193 11:31:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:37.193 ************************************ 00:19:37.193 START TEST raid_superblock_test_md_separate 00:19:37.193 ************************************ 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87765 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87765 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87765 ']' 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.193 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:37.193 11:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.194 11:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:37.194 11:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.452 [2024-11-15 11:31:20.154922] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:19:37.452 [2024-11-15 11:31:20.155133] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87765 ] 00:19:37.452 [2024-11-15 11:31:20.342014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.710 [2024-11-15 11:31:20.473395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.968 [2024-11-15 11:31:20.678463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:37.968 [2024-11-15 11:31:20.678865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:38.227 11:31:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.227 malloc1 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.227 [2024-11-15 11:31:21.168502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:38.227 [2024-11-15 11:31:21.168733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.227 [2024-11-15 11:31:21.168806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:19:38.227 [2024-11-15 11:31:21.168919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.227 [2024-11-15 11:31:21.171589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.227 [2024-11-15 11:31:21.171773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:38.227 pt1 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:38.227 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.487 malloc2 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.487 [2024-11-15 11:31:21.225271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:38.487 [2024-11-15 11:31:21.225486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.487 [2024-11-15 11:31:21.225549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:38.487 [2024-11-15 11:31:21.225565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.487 [2024-11-15 11:31:21.228198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.487 [2024-11-15 11:31:21.228245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:38.487 pt2 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.487 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.488 [2024-11-15 11:31:21.237306] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:38.488 [2024-11-15 11:31:21.239754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:38.488 [2024-11-15 11:31:21.239959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:38.488 [2024-11-15 11:31:21.239979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:38.488 [2024-11-15 11:31:21.240057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:38.488 [2024-11-15 11:31:21.240233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:38.488 [2024-11-15 11:31:21.240252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:38.488 [2024-11-15 11:31:21.240364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.488 11:31:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.488 "name": "raid_bdev1", 00:19:38.488 "uuid": "f8ca46dc-413a-4bb7-a035-2525e5a93963", 00:19:38.488 "strip_size_kb": 0, 00:19:38.488 "state": "online", 00:19:38.488 "raid_level": "raid1", 00:19:38.488 "superblock": true, 00:19:38.488 "num_base_bdevs": 2, 00:19:38.488 "num_base_bdevs_discovered": 2, 00:19:38.488 "num_base_bdevs_operational": 2, 00:19:38.488 "base_bdevs_list": [ 00:19:38.488 { 00:19:38.488 "name": "pt1", 00:19:38.488 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:38.488 "is_configured": true, 00:19:38.488 "data_offset": 256, 00:19:38.488 "data_size": 7936 00:19:38.488 }, 00:19:38.488 { 00:19:38.488 "name": "pt2", 00:19:38.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.488 "is_configured": true, 00:19:38.488 "data_offset": 256, 00:19:38.488 "data_size": 7936 00:19:38.488 } 00:19:38.488 ] 00:19:38.488 }' 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:38.488 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.056 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:39.056 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:39.056 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:39.056 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:39.056 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:39.056 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:39.056 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:39.056 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.056 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:39.056 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.056 [2024-11-15 11:31:21.777843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.056 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.056 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:39.056 "name": "raid_bdev1", 00:19:39.056 "aliases": [ 00:19:39.056 "f8ca46dc-413a-4bb7-a035-2525e5a93963" 00:19:39.056 ], 00:19:39.056 "product_name": "Raid Volume", 00:19:39.056 "block_size": 4096, 00:19:39.056 "num_blocks": 7936, 00:19:39.056 "uuid": "f8ca46dc-413a-4bb7-a035-2525e5a93963", 00:19:39.056 "md_size": 32, 
00:19:39.056 "md_interleave": false, 00:19:39.056 "dif_type": 0, 00:19:39.056 "assigned_rate_limits": { 00:19:39.056 "rw_ios_per_sec": 0, 00:19:39.056 "rw_mbytes_per_sec": 0, 00:19:39.056 "r_mbytes_per_sec": 0, 00:19:39.056 "w_mbytes_per_sec": 0 00:19:39.056 }, 00:19:39.056 "claimed": false, 00:19:39.056 "zoned": false, 00:19:39.056 "supported_io_types": { 00:19:39.056 "read": true, 00:19:39.056 "write": true, 00:19:39.056 "unmap": false, 00:19:39.056 "flush": false, 00:19:39.056 "reset": true, 00:19:39.056 "nvme_admin": false, 00:19:39.056 "nvme_io": false, 00:19:39.056 "nvme_io_md": false, 00:19:39.056 "write_zeroes": true, 00:19:39.056 "zcopy": false, 00:19:39.056 "get_zone_info": false, 00:19:39.057 "zone_management": false, 00:19:39.057 "zone_append": false, 00:19:39.057 "compare": false, 00:19:39.057 "compare_and_write": false, 00:19:39.057 "abort": false, 00:19:39.057 "seek_hole": false, 00:19:39.057 "seek_data": false, 00:19:39.057 "copy": false, 00:19:39.057 "nvme_iov_md": false 00:19:39.057 }, 00:19:39.057 "memory_domains": [ 00:19:39.057 { 00:19:39.057 "dma_device_id": "system", 00:19:39.057 "dma_device_type": 1 00:19:39.057 }, 00:19:39.057 { 00:19:39.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.057 "dma_device_type": 2 00:19:39.057 }, 00:19:39.057 { 00:19:39.057 "dma_device_id": "system", 00:19:39.057 "dma_device_type": 1 00:19:39.057 }, 00:19:39.057 { 00:19:39.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.057 "dma_device_type": 2 00:19:39.057 } 00:19:39.057 ], 00:19:39.057 "driver_specific": { 00:19:39.057 "raid": { 00:19:39.057 "uuid": "f8ca46dc-413a-4bb7-a035-2525e5a93963", 00:19:39.057 "strip_size_kb": 0, 00:19:39.057 "state": "online", 00:19:39.057 "raid_level": "raid1", 00:19:39.057 "superblock": true, 00:19:39.057 "num_base_bdevs": 2, 00:19:39.057 "num_base_bdevs_discovered": 2, 00:19:39.057 "num_base_bdevs_operational": 2, 00:19:39.057 "base_bdevs_list": [ 00:19:39.057 { 00:19:39.057 "name": "pt1", 00:19:39.057 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:39.057 "is_configured": true, 00:19:39.057 "data_offset": 256, 00:19:39.057 "data_size": 7936 00:19:39.057 }, 00:19:39.057 { 00:19:39.057 "name": "pt2", 00:19:39.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.057 "is_configured": true, 00:19:39.057 "data_offset": 256, 00:19:39.057 "data_size": 7936 00:19:39.057 } 00:19:39.057 ] 00:19:39.057 } 00:19:39.057 } 00:19:39.057 }' 00:19:39.057 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:39.057 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:39.057 pt2' 00:19:39.057 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.057 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:39.057 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.057 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.057 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:39.057 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.057 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.057 11:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.057 11:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:39.057 11:31:22 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:39.057 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.317 [2024-11-15 11:31:22.065903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f8ca46dc-413a-4bb7-a035-2525e5a93963 00:19:39.317 
11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z f8ca46dc-413a-4bb7-a035-2525e5a93963 ']' 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.317 [2024-11-15 11:31:22.121603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.317 [2024-11-15 11:31:22.121640] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.317 [2024-11-15 11:31:22.121774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.317 [2024-11-15 11:31:22.121862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.317 [2024-11-15 11:31:22.121882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:39.317 11:31:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.317 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.577 [2024-11-15 11:31:22.265732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:39.577 [2024-11-15 11:31:22.268936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:39.577 [2024-11-15 11:31:22.269328] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:39.577 [2024-11-15 11:31:22.269589] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:19:39.577 [2024-11-15 11:31:22.269821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.577 [2024-11-15 11:31:22.269930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:39.577 request: 00:19:39.577 { 00:19:39.577 "name": "raid_bdev1", 00:19:39.577 "raid_level": "raid1", 00:19:39.577 "base_bdevs": [ 00:19:39.577 "malloc1", 00:19:39.577 "malloc2" 00:19:39.577 ], 00:19:39.577 "superblock": false, 00:19:39.577 "method": "bdev_raid_create", 00:19:39.577 "req_id": 1 00:19:39.577 } 00:19:39.577 Got JSON-RPC error response 00:19:39.577 response: 00:19:39.577 { 00:19:39.577 "code": -17, 00:19:39.577 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:39.577 } 00:19:39.577 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:39.577 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:19:39.577 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:39.577 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:39.577 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.578 11:31:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.578 [2024-11-15 11:31:22.334272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:39.578 [2024-11-15 11:31:22.334448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.578 [2024-11-15 11:31:22.334518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:39.578 [2024-11-15 11:31:22.334769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.578 [2024-11-15 11:31:22.337659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.578 [2024-11-15 11:31:22.337874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:39.578 [2024-11-15 11:31:22.337943] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:39.578 [2024-11-15 11:31:22.338020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:39.578 pt1 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.578 
11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.578 "name": "raid_bdev1", 00:19:39.578 "uuid": "f8ca46dc-413a-4bb7-a035-2525e5a93963", 00:19:39.578 "strip_size_kb": 0, 00:19:39.578 "state": "configuring", 00:19:39.578 "raid_level": "raid1", 00:19:39.578 "superblock": true, 00:19:39.578 "num_base_bdevs": 2, 00:19:39.578 "num_base_bdevs_discovered": 1, 00:19:39.578 
"num_base_bdevs_operational": 2, 00:19:39.578 "base_bdevs_list": [ 00:19:39.578 { 00:19:39.578 "name": "pt1", 00:19:39.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:39.578 "is_configured": true, 00:19:39.578 "data_offset": 256, 00:19:39.578 "data_size": 7936 00:19:39.578 }, 00:19:39.578 { 00:19:39.578 "name": null, 00:19:39.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.578 "is_configured": false, 00:19:39.578 "data_offset": 256, 00:19:39.578 "data_size": 7936 00:19:39.578 } 00:19:39.578 ] 00:19:39.578 }' 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.578 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.148 [2024-11-15 11:31:22.882512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.148 [2024-11-15 11:31:22.882927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.148 [2024-11-15 11:31:22.882984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:40.148 [2024-11-15 11:31:22.883004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.148 
[2024-11-15 11:31:22.883358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.148 [2024-11-15 11:31:22.883421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:40.148 [2024-11-15 11:31:22.883508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:40.148 [2024-11-15 11:31:22.883576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.148 [2024-11-15 11:31:22.883785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:40.148 [2024-11-15 11:31:22.883807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:40.148 [2024-11-15 11:31:22.883910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:40.148 [2024-11-15 11:31:22.884069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:40.148 [2024-11-15 11:31:22.884082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:40.148 [2024-11-15 11:31:22.884283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.148 pt2 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.148 "name": "raid_bdev1", 00:19:40.148 "uuid": "f8ca46dc-413a-4bb7-a035-2525e5a93963", 00:19:40.148 "strip_size_kb": 0, 00:19:40.148 "state": "online", 00:19:40.148 "raid_level": "raid1", 00:19:40.148 "superblock": true, 00:19:40.148 "num_base_bdevs": 2, 00:19:40.148 "num_base_bdevs_discovered": 2, 00:19:40.148 "num_base_bdevs_operational": 2, 00:19:40.148 "base_bdevs_list": [ 00:19:40.148 { 00:19:40.148 "name": 
"pt1", 00:19:40.148 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:40.148 "is_configured": true, 00:19:40.148 "data_offset": 256, 00:19:40.148 "data_size": 7936 00:19:40.148 }, 00:19:40.148 { 00:19:40.148 "name": "pt2", 00:19:40.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.148 "is_configured": true, 00:19:40.148 "data_offset": 256, 00:19:40.148 "data_size": 7936 00:19:40.148 } 00:19:40.148 ] 00:19:40.148 }' 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.148 11:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:40.717 [2024-11-15 11:31:23.435016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.717 11:31:23 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:40.717 "name": "raid_bdev1", 00:19:40.717 "aliases": [ 00:19:40.717 "f8ca46dc-413a-4bb7-a035-2525e5a93963" 00:19:40.717 ], 00:19:40.717 "product_name": "Raid Volume", 00:19:40.717 "block_size": 4096, 00:19:40.717 "num_blocks": 7936, 00:19:40.717 "uuid": "f8ca46dc-413a-4bb7-a035-2525e5a93963", 00:19:40.717 "md_size": 32, 00:19:40.717 "md_interleave": false, 00:19:40.717 "dif_type": 0, 00:19:40.717 "assigned_rate_limits": { 00:19:40.717 "rw_ios_per_sec": 0, 00:19:40.717 "rw_mbytes_per_sec": 0, 00:19:40.717 "r_mbytes_per_sec": 0, 00:19:40.717 "w_mbytes_per_sec": 0 00:19:40.717 }, 00:19:40.717 "claimed": false, 00:19:40.717 "zoned": false, 00:19:40.717 "supported_io_types": { 00:19:40.717 "read": true, 00:19:40.717 "write": true, 00:19:40.717 "unmap": false, 00:19:40.717 "flush": false, 00:19:40.717 "reset": true, 00:19:40.717 "nvme_admin": false, 00:19:40.717 "nvme_io": false, 00:19:40.717 "nvme_io_md": false, 00:19:40.717 "write_zeroes": true, 00:19:40.717 "zcopy": false, 00:19:40.717 "get_zone_info": false, 00:19:40.717 "zone_management": false, 00:19:40.717 "zone_append": false, 00:19:40.717 "compare": false, 00:19:40.717 "compare_and_write": false, 00:19:40.717 "abort": false, 00:19:40.717 "seek_hole": false, 00:19:40.717 "seek_data": false, 00:19:40.717 "copy": false, 00:19:40.717 "nvme_iov_md": false 00:19:40.717 }, 00:19:40.717 "memory_domains": [ 00:19:40.717 { 00:19:40.717 "dma_device_id": "system", 00:19:40.717 "dma_device_type": 1 00:19:40.717 }, 00:19:40.717 { 00:19:40.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.717 "dma_device_type": 2 00:19:40.717 }, 00:19:40.717 { 00:19:40.717 "dma_device_id": "system", 00:19:40.717 "dma_device_type": 1 00:19:40.717 }, 00:19:40.717 { 00:19:40.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.717 
"dma_device_type": 2 00:19:40.717 } 00:19:40.717 ], 00:19:40.717 "driver_specific": { 00:19:40.717 "raid": { 00:19:40.717 "uuid": "f8ca46dc-413a-4bb7-a035-2525e5a93963", 00:19:40.717 "strip_size_kb": 0, 00:19:40.717 "state": "online", 00:19:40.717 "raid_level": "raid1", 00:19:40.717 "superblock": true, 00:19:40.717 "num_base_bdevs": 2, 00:19:40.717 "num_base_bdevs_discovered": 2, 00:19:40.717 "num_base_bdevs_operational": 2, 00:19:40.717 "base_bdevs_list": [ 00:19:40.717 { 00:19:40.717 "name": "pt1", 00:19:40.717 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:40.717 "is_configured": true, 00:19:40.717 "data_offset": 256, 00:19:40.717 "data_size": 7936 00:19:40.717 }, 00:19:40.717 { 00:19:40.717 "name": "pt2", 00:19:40.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.717 "is_configured": true, 00:19:40.717 "data_offset": 256, 00:19:40.717 "data_size": 7936 00:19:40.717 } 00:19:40.717 ] 00:19:40.717 } 00:19:40.717 } 00:19:40.717 }' 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:40.717 pt2' 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 
00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.717 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.977 [2024-11-15 11:31:23.719170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' f8ca46dc-413a-4bb7-a035-2525e5a93963 '!=' f8ca46dc-413a-4bb7-a035-2525e5a93963 ']' 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.977 [2024-11-15 11:31:23.766826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.977 "name": "raid_bdev1", 00:19:40.977 "uuid": "f8ca46dc-413a-4bb7-a035-2525e5a93963", 00:19:40.977 "strip_size_kb": 0, 00:19:40.977 "state": "online", 00:19:40.977 "raid_level": "raid1", 00:19:40.977 "superblock": true, 00:19:40.977 "num_base_bdevs": 2, 00:19:40.977 "num_base_bdevs_discovered": 1, 00:19:40.977 "num_base_bdevs_operational": 1, 00:19:40.977 "base_bdevs_list": [ 00:19:40.977 { 00:19:40.977 "name": null, 00:19:40.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.977 "is_configured": false, 00:19:40.977 "data_offset": 0, 
00:19:40.977 "data_size": 7936 00:19:40.977 }, 00:19:40.977 { 00:19:40.977 "name": "pt2", 00:19:40.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.977 "is_configured": true, 00:19:40.977 "data_offset": 256, 00:19:40.977 "data_size": 7936 00:19:40.977 } 00:19:40.977 ] 00:19:40.977 }' 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.977 11:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.546 [2024-11-15 11:31:24.303031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.546 [2024-11-15 11:31:24.303072] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.546 [2024-11-15 11:31:24.303188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.546 [2024-11-15 11:31:24.303290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.546 [2024-11-15 11:31:24.303313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 
-- # jq -r '.[]' 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.546 [2024-11-15 11:31:24.374989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:41.546 [2024-11-15 11:31:24.375277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.546 [2024-11-15 11:31:24.375314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:41.546 [2024-11-15 11:31:24.375335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.546 [2024-11-15 11:31:24.378423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.546 [2024-11-15 11:31:24.378593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:41.546 [2024-11-15 11:31:24.378714] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:41.546 [2024-11-15 11:31:24.378787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:41.546 [2024-11-15 11:31:24.378932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:41.546 [2024-11-15 11:31:24.378954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:41.546 [2024-11-15 11:31:24.379083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:41.546 [2024-11-15 11:31:24.379308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:41.546 [2024-11-15 11:31:24.379324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:41.546 pt2 00:19:41.546 [2024-11-15 11:31:24.379520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:41.546 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.547 "name": "raid_bdev1", 00:19:41.547 
"uuid": "f8ca46dc-413a-4bb7-a035-2525e5a93963", 00:19:41.547 "strip_size_kb": 0, 00:19:41.547 "state": "online", 00:19:41.547 "raid_level": "raid1", 00:19:41.547 "superblock": true, 00:19:41.547 "num_base_bdevs": 2, 00:19:41.547 "num_base_bdevs_discovered": 1, 00:19:41.547 "num_base_bdevs_operational": 1, 00:19:41.547 "base_bdevs_list": [ 00:19:41.547 { 00:19:41.547 "name": null, 00:19:41.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.547 "is_configured": false, 00:19:41.547 "data_offset": 256, 00:19:41.547 "data_size": 7936 00:19:41.547 }, 00:19:41.547 { 00:19:41.547 "name": "pt2", 00:19:41.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.547 "is_configured": true, 00:19:41.547 "data_offset": 256, 00:19:41.547 "data_size": 7936 00:19:41.547 } 00:19:41.547 ] 00:19:41.547 }' 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.547 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.115 [2024-11-15 11:31:24.915276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:42.115 [2024-11-15 11:31:24.915485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:42.115 [2024-11-15 11:31:24.915733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.115 [2024-11-15 11:31:24.915951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.115 [2024-11-15 11:31:24.916082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.115 [2024-11-15 11:31:24.987349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:42.115 [2024-11-15 11:31:24.987438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.115 [2024-11-15 11:31:24.987475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:42.115 [2024-11-15 11:31:24.987491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.115 [2024-11-15 
11:31:24.990575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.115 [2024-11-15 11:31:24.990750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:42.115 [2024-11-15 11:31:24.990859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:42.115 [2024-11-15 11:31:24.990928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:42.115 [2024-11-15 11:31:24.991135] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:42.115 [2024-11-15 11:31:24.991154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:42.115 [2024-11-15 11:31:24.991210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:42.115 [2024-11-15 11:31:24.991316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:42.115 pt1 00:19:42.115 [2024-11-15 11:31:24.991500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:42.115 [2024-11-15 11:31:24.991517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:42.115 [2024-11-15 11:31:24.991618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:42.115 [2024-11-15 11:31:24.991766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:42.115 [2024-11-15 11:31:24.991785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:42.115 [2024-11-15 11:31:24.991936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.115 11:31:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.115 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.116 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.116 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.116 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.116 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.116 11:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.116 11:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.116 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.116 11:31:25 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.116 "name": "raid_bdev1", 00:19:42.116 "uuid": "f8ca46dc-413a-4bb7-a035-2525e5a93963", 00:19:42.116 "strip_size_kb": 0, 00:19:42.116 "state": "online", 00:19:42.116 "raid_level": "raid1", 00:19:42.116 "superblock": true, 00:19:42.116 "num_base_bdevs": 2, 00:19:42.116 "num_base_bdevs_discovered": 1, 00:19:42.116 "num_base_bdevs_operational": 1, 00:19:42.116 "base_bdevs_list": [ 00:19:42.116 { 00:19:42.116 "name": null, 00:19:42.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.116 "is_configured": false, 00:19:42.116 "data_offset": 256, 00:19:42.116 "data_size": 7936 00:19:42.116 }, 00:19:42.116 { 00:19:42.116 "name": "pt2", 00:19:42.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.116 "is_configured": true, 00:19:42.116 "data_offset": 256, 00:19:42.116 "data_size": 7936 00:19:42.116 } 00:19:42.116 ] 00:19:42.116 }' 00:19:42.116 11:31:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.116 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.684 11:31:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:42.685 11:31:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:42.685 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.685 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.685 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.685 11:31:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:42.685 11:31:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:19:42.685 11:31:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:42.685 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.685 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.685 [2024-11-15 11:31:25.587855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:42.685 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.944 11:31:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' f8ca46dc-413a-4bb7-a035-2525e5a93963 '!=' f8ca46dc-413a-4bb7-a035-2525e5a93963 ']' 00:19:42.944 11:31:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87765 00:19:42.944 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87765 ']' 00:19:42.944 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 87765 00:19:42.944 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:19:42.944 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:42.945 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87765 00:19:42.945 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:42.945 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:42.945 killing process with pid 87765 00:19:42.945 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87765' 00:19:42.945 11:31:25 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@971 -- # kill 87765 00:19:42.945 [2024-11-15 11:31:25.669486] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:42.945 11:31:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 87765 00:19:42.945 [2024-11-15 11:31:25.669604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.945 [2024-11-15 11:31:25.669674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.945 [2024-11-15 11:31:25.669701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:42.945 [2024-11-15 11:31:25.863717] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:44.325 11:31:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:44.325 00:19:44.325 real 0m6.955s 00:19:44.325 user 0m10.873s 00:19:44.325 sys 0m1.121s 00:19:44.325 11:31:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:44.325 11:31:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.325 ************************************ 00:19:44.325 END TEST raid_superblock_test_md_separate 00:19:44.325 ************************************ 00:19:44.325 11:31:27 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:44.325 11:31:27 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:44.325 11:31:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:44.325 11:31:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:44.325 11:31:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:44.325 ************************************ 00:19:44.325 START TEST raid_rebuild_test_sb_md_separate 00:19:44.325 
************************************ 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88094 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88094 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88094 ']' 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:44.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:44.325 11:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.325 [2024-11-15 11:31:27.187214] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:19:44.325 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:44.325 Zero copy mechanism will not be used. 00:19:44.325 [2024-11-15 11:31:27.187444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88094 ] 00:19:44.584 [2024-11-15 11:31:27.372898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.584 [2024-11-15 11:31:27.520160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.844 [2024-11-15 11:31:27.737959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.844 [2024-11-15 11:31:27.738055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.412 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:45.412 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:45.412 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.412 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:45.412 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.413 BaseBdev1_malloc 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.413 [2024-11-15 11:31:28.140910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:45.413 [2024-11-15 11:31:28.140997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.413 [2024-11-15 11:31:28.141032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:45.413 [2024-11-15 11:31:28.141052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.413 [2024-11-15 11:31:28.143823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.413 [2024-11-15 11:31:28.143897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:45.413 BaseBdev1 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.413 11:31:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.413 BaseBdev2_malloc 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.413 [2024-11-15 11:31:28.202993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:45.413 [2024-11-15 11:31:28.203082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.413 [2024-11-15 11:31:28.203114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:45.413 [2024-11-15 11:31:28.203132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.413 [2024-11-15 11:31:28.205821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.413 [2024-11-15 11:31:28.205880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:45.413 BaseBdev2 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.413 spare_malloc 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.413 spare_delay 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.413 [2024-11-15 11:31:28.304934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:45.413 [2024-11-15 11:31:28.305046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.413 [2024-11-15 11:31:28.305086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:45.413 [2024-11-15 11:31:28.305111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.413 [2024-11-15 11:31:28.308631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.413 [2024-11-15 11:31:28.308716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:45.413 spare 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:45.413 
11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.413 [2024-11-15 11:31:28.317070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:45.413 [2024-11-15 11:31:28.320366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:45.413 [2024-11-15 11:31:28.320734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:45.413 [2024-11-15 11:31:28.320776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:45.413 [2024-11-15 11:31:28.320892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:45.413 [2024-11-15 11:31:28.321104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:45.413 [2024-11-15 11:31:28.321136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:45.413 [2024-11-15 11:31:28.321389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.413 
11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.413 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.673 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.673 "name": "raid_bdev1", 00:19:45.673 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:45.673 "strip_size_kb": 0, 00:19:45.673 "state": "online", 00:19:45.673 "raid_level": "raid1", 00:19:45.674 "superblock": true, 00:19:45.674 "num_base_bdevs": 2, 00:19:45.674 "num_base_bdevs_discovered": 2, 00:19:45.674 "num_base_bdevs_operational": 2, 00:19:45.674 "base_bdevs_list": [ 00:19:45.674 { 00:19:45.674 "name": "BaseBdev1", 00:19:45.674 "uuid": "cbe0643a-13b7-540b-ad55-4715b450f407", 00:19:45.674 "is_configured": true, 00:19:45.674 "data_offset": 256, 00:19:45.674 "data_size": 7936 00:19:45.674 }, 00:19:45.674 { 00:19:45.674 "name": "BaseBdev2", 00:19:45.674 "uuid": 
"18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:45.674 "is_configured": true, 00:19:45.674 "data_offset": 256, 00:19:45.674 "data_size": 7936 00:19:45.674 } 00:19:45.674 ] 00:19:45.674 }' 00:19:45.674 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.674 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.933 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:45.933 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.933 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.933 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:45.934 [2024-11-15 11:31:28.865995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:46.193 11:31:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:46.193 11:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:46.452 [2024-11-15 11:31:29.189786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:46.452 /dev/nbd0 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:46.452 11:31:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:46.452 1+0 records in 00:19:46.452 1+0 records out 00:19:46.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374294 s, 10.9 MB/s 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:46.452 11:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:47.395 7936+0 records in 00:19:47.395 7936+0 records out 00:19:47.395 32505856 bytes (33 MB, 31 MiB) copied, 0.896813 s, 36.2 MB/s 00:19:47.395 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:47.395 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:47.395 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:47.395 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:47.395 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:47.395 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:47.395 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:47.655 [2024-11-15 11:31:30.440979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.655 [2024-11-15 11:31:30.453103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.655 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:47.656 11:31:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.656 "name": "raid_bdev1", 00:19:47.656 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:47.656 "strip_size_kb": 0, 00:19:47.656 "state": "online", 00:19:47.656 "raid_level": "raid1", 00:19:47.656 "superblock": true, 00:19:47.656 "num_base_bdevs": 2, 00:19:47.656 "num_base_bdevs_discovered": 1, 00:19:47.656 "num_base_bdevs_operational": 1, 00:19:47.656 "base_bdevs_list": [ 00:19:47.656 { 00:19:47.656 "name": null, 00:19:47.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.656 "is_configured": false, 00:19:47.656 "data_offset": 0, 00:19:47.656 "data_size": 7936 00:19:47.656 }, 00:19:47.656 { 00:19:47.656 "name": "BaseBdev2", 00:19:47.656 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:47.656 "is_configured": true, 00:19:47.656 "data_offset": 256, 00:19:47.656 "data_size": 7936 00:19:47.656 } 
00:19:47.656 ] 00:19:47.656 }' 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.656 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.224 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:48.224 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.224 11:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.224 [2024-11-15 11:31:30.993411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.224 [2024-11-15 11:31:31.007917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:48.224 11:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.224 11:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:48.224 [2024-11-15 11:31:31.010911] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:49.163 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.164 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.164 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.164 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.164 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.164 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.164 11:31:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.164 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.164 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.164 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.164 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.164 "name": "raid_bdev1", 00:19:49.164 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:49.164 "strip_size_kb": 0, 00:19:49.164 "state": "online", 00:19:49.164 "raid_level": "raid1", 00:19:49.164 "superblock": true, 00:19:49.164 "num_base_bdevs": 2, 00:19:49.164 "num_base_bdevs_discovered": 2, 00:19:49.164 "num_base_bdevs_operational": 2, 00:19:49.164 "process": { 00:19:49.164 "type": "rebuild", 00:19:49.164 "target": "spare", 00:19:49.164 "progress": { 00:19:49.164 "blocks": 2560, 00:19:49.164 "percent": 32 00:19:49.164 } 00:19:49.164 }, 00:19:49.164 "base_bdevs_list": [ 00:19:49.164 { 00:19:49.164 "name": "spare", 00:19:49.164 "uuid": "ae4a6b96-375a-511b-adc0-c695c6c005e5", 00:19:49.164 "is_configured": true, 00:19:49.164 "data_offset": 256, 00:19:49.164 "data_size": 7936 00:19:49.164 }, 00:19:49.164 { 00:19:49.164 "name": "BaseBdev2", 00:19:49.165 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:49.165 "is_configured": true, 00:19:49.165 "data_offset": 256, 00:19:49.165 "data_size": 7936 00:19:49.165 } 00:19:49.165 ] 00:19:49.165 }' 00:19:49.165 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.424 [2024-11-15 11:31:32.205549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.424 [2024-11-15 11:31:32.222380] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:49.424 [2024-11-15 11:31:32.222616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.424 [2024-11-15 11:31:32.222653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.424 [2024-11-15 11:31:32.222671] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.424 "name": "raid_bdev1", 00:19:49.424 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:49.424 "strip_size_kb": 0, 00:19:49.424 "state": "online", 00:19:49.424 "raid_level": "raid1", 00:19:49.424 "superblock": true, 00:19:49.424 "num_base_bdevs": 2, 00:19:49.424 "num_base_bdevs_discovered": 1, 00:19:49.424 "num_base_bdevs_operational": 1, 00:19:49.424 "base_bdevs_list": [ 00:19:49.424 { 00:19:49.424 "name": null, 00:19:49.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.424 "is_configured": false, 00:19:49.424 "data_offset": 0, 00:19:49.424 "data_size": 7936 00:19:49.424 }, 00:19:49.424 { 00:19:49.424 "name": "BaseBdev2", 00:19:49.424 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:49.424 "is_configured": true, 00:19:49.424 "data_offset": 
256, 00:19:49.424 "data_size": 7936 00:19:49.424 } 00:19:49.424 ] 00:19:49.424 }' 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.424 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.991 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.991 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.991 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:49.991 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:49.991 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.991 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.991 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.991 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.991 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.991 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.991 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.991 "name": "raid_bdev1", 00:19:49.991 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:49.991 "strip_size_kb": 0, 00:19:49.991 "state": "online", 00:19:49.991 "raid_level": "raid1", 00:19:49.991 "superblock": true, 00:19:49.991 "num_base_bdevs": 2, 00:19:49.991 "num_base_bdevs_discovered": 1, 00:19:49.991 "num_base_bdevs_operational": 1, 
00:19:49.991 "base_bdevs_list": [ 00:19:49.991 { 00:19:49.991 "name": null, 00:19:49.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.992 "is_configured": false, 00:19:49.992 "data_offset": 0, 00:19:49.992 "data_size": 7936 00:19:49.992 }, 00:19:49.992 { 00:19:49.992 "name": "BaseBdev2", 00:19:49.992 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:49.992 "is_configured": true, 00:19:49.992 "data_offset": 256, 00:19:49.992 "data_size": 7936 00:19:49.992 } 00:19:49.992 ] 00:19:49.992 }' 00:19:49.992 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.992 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:49.992 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.250 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:50.250 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:50.250 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.250 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.250 [2024-11-15 11:31:32.974044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.250 [2024-11-15 11:31:32.988195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:50.250 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.250 11:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:50.250 [2024-11-15 11:31:32.991094] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.188 11:31:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.188 11:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.188 11:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.188 11:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.188 11:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.188 11:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.188 11:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.188 11:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.188 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.188 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.188 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.188 "name": "raid_bdev1", 00:19:51.188 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:51.188 "strip_size_kb": 0, 00:19:51.188 "state": "online", 00:19:51.188 "raid_level": "raid1", 00:19:51.188 "superblock": true, 00:19:51.188 "num_base_bdevs": 2, 00:19:51.188 "num_base_bdevs_discovered": 2, 00:19:51.188 "num_base_bdevs_operational": 2, 00:19:51.188 "process": { 00:19:51.188 "type": "rebuild", 00:19:51.188 "target": "spare", 00:19:51.188 "progress": { 00:19:51.188 "blocks": 2560, 00:19:51.188 "percent": 32 00:19:51.188 } 00:19:51.188 }, 00:19:51.188 "base_bdevs_list": [ 00:19:51.188 { 00:19:51.188 "name": "spare", 00:19:51.188 "uuid": 
"ae4a6b96-375a-511b-adc0-c695c6c005e5", 00:19:51.188 "is_configured": true, 00:19:51.188 "data_offset": 256, 00:19:51.188 "data_size": 7936 00:19:51.188 }, 00:19:51.188 { 00:19:51.188 "name": "BaseBdev2", 00:19:51.188 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:51.188 "is_configured": true, 00:19:51.188 "data_offset": 256, 00:19:51.188 "data_size": 7936 00:19:51.188 } 00:19:51.188 ] 00:19:51.188 }' 00:19:51.188 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.188 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.188 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:51.447 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=771 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.447 
11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.447 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.448 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.448 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.448 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.448 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.448 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.448 "name": "raid_bdev1", 00:19:51.448 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:51.448 "strip_size_kb": 0, 00:19:51.448 "state": "online", 00:19:51.448 "raid_level": "raid1", 00:19:51.448 "superblock": true, 00:19:51.448 "num_base_bdevs": 2, 00:19:51.448 "num_base_bdevs_discovered": 2, 00:19:51.448 "num_base_bdevs_operational": 2, 00:19:51.448 "process": { 00:19:51.448 "type": "rebuild", 00:19:51.448 "target": "spare", 00:19:51.448 "progress": { 00:19:51.448 "blocks": 2816, 00:19:51.448 "percent": 35 00:19:51.448 } 00:19:51.448 }, 00:19:51.448 "base_bdevs_list": [ 00:19:51.448 { 00:19:51.448 "name": "spare", 00:19:51.448 "uuid": "ae4a6b96-375a-511b-adc0-c695c6c005e5", 00:19:51.448 "is_configured": true, 00:19:51.448 "data_offset": 256, 00:19:51.448 "data_size": 7936 00:19:51.448 
}, 00:19:51.448 { 00:19:51.448 "name": "BaseBdev2", 00:19:51.448 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:51.448 "is_configured": true, 00:19:51.448 "data_offset": 256, 00:19:51.448 "data_size": 7936 00:19:51.448 } 00:19:51.448 ] 00:19:51.448 }' 00:19:51.448 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.448 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.448 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.448 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.448 11:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.826 "name": "raid_bdev1", 00:19:52.826 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:52.826 "strip_size_kb": 0, 00:19:52.826 "state": "online", 00:19:52.826 "raid_level": "raid1", 00:19:52.826 "superblock": true, 00:19:52.826 "num_base_bdevs": 2, 00:19:52.826 "num_base_bdevs_discovered": 2, 00:19:52.826 "num_base_bdevs_operational": 2, 00:19:52.826 "process": { 00:19:52.826 "type": "rebuild", 00:19:52.826 "target": "spare", 00:19:52.826 "progress": { 00:19:52.826 "blocks": 5888, 00:19:52.826 "percent": 74 00:19:52.826 } 00:19:52.826 }, 00:19:52.826 "base_bdevs_list": [ 00:19:52.826 { 00:19:52.826 "name": "spare", 00:19:52.826 "uuid": "ae4a6b96-375a-511b-adc0-c695c6c005e5", 00:19:52.826 "is_configured": true, 00:19:52.826 "data_offset": 256, 00:19:52.826 "data_size": 7936 00:19:52.826 }, 00:19:52.826 { 00:19:52.826 "name": "BaseBdev2", 00:19:52.826 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:52.826 "is_configured": true, 00:19:52.826 "data_offset": 256, 00:19:52.826 "data_size": 7936 00:19:52.826 } 00:19:52.826 ] 00:19:52.826 }' 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.826 11:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:53.394 [2024-11-15 11:31:36.120213] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:53.394 [2024-11-15 11:31:36.120344] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:53.394 [2024-11-15 11:31:36.120527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.653 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:53.653 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.654 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.654 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.654 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.654 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.654 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.654 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.654 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.654 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.654 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.654 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.654 "name": "raid_bdev1", 00:19:53.654 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:53.654 
"strip_size_kb": 0, 00:19:53.654 "state": "online", 00:19:53.654 "raid_level": "raid1", 00:19:53.654 "superblock": true, 00:19:53.654 "num_base_bdevs": 2, 00:19:53.654 "num_base_bdevs_discovered": 2, 00:19:53.654 "num_base_bdevs_operational": 2, 00:19:53.654 "base_bdevs_list": [ 00:19:53.654 { 00:19:53.654 "name": "spare", 00:19:53.654 "uuid": "ae4a6b96-375a-511b-adc0-c695c6c005e5", 00:19:53.654 "is_configured": true, 00:19:53.654 "data_offset": 256, 00:19:53.654 "data_size": 7936 00:19:53.654 }, 00:19:53.654 { 00:19:53.654 "name": "BaseBdev2", 00:19:53.654 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:53.654 "is_configured": true, 00:19:53.654 "data_offset": 256, 00:19:53.654 "data_size": 7936 00:19:53.654 } 00:19:53.654 ] 00:19:53.654 }' 00:19:53.654 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.913 11:31:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.913 "name": "raid_bdev1", 00:19:53.913 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:53.913 "strip_size_kb": 0, 00:19:53.913 "state": "online", 00:19:53.913 "raid_level": "raid1", 00:19:53.913 "superblock": true, 00:19:53.913 "num_base_bdevs": 2, 00:19:53.913 "num_base_bdevs_discovered": 2, 00:19:53.913 "num_base_bdevs_operational": 2, 00:19:53.913 "base_bdevs_list": [ 00:19:53.913 { 00:19:53.913 "name": "spare", 00:19:53.913 "uuid": "ae4a6b96-375a-511b-adc0-c695c6c005e5", 00:19:53.913 "is_configured": true, 00:19:53.913 "data_offset": 256, 00:19:53.913 "data_size": 7936 00:19:53.913 }, 00:19:53.913 { 00:19:53.913 "name": "BaseBdev2", 00:19:53.913 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:53.913 "is_configured": true, 00:19:53.913 "data_offset": 256, 00:19:53.913 "data_size": 7936 00:19:53.913 } 00:19:53.913 ] 00:19:53.913 }' 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.913 11:31:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.913 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.172 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.172 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.172 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.172 "name": "raid_bdev1", 00:19:54.172 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:54.172 "strip_size_kb": 0, 00:19:54.172 "state": "online", 00:19:54.172 "raid_level": "raid1", 00:19:54.172 "superblock": true, 00:19:54.172 "num_base_bdevs": 2, 00:19:54.172 "num_base_bdevs_discovered": 2, 00:19:54.172 "num_base_bdevs_operational": 2, 00:19:54.172 "base_bdevs_list": [ 00:19:54.172 { 00:19:54.172 "name": "spare", 00:19:54.172 "uuid": "ae4a6b96-375a-511b-adc0-c695c6c005e5", 00:19:54.172 "is_configured": true, 00:19:54.172 "data_offset": 256, 00:19:54.172 "data_size": 7936 00:19:54.172 }, 00:19:54.172 { 00:19:54.172 "name": "BaseBdev2", 00:19:54.172 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:54.172 "is_configured": true, 00:19:54.172 "data_offset": 256, 00:19:54.172 "data_size": 7936 00:19:54.172 } 00:19:54.172 ] 00:19:54.172 }' 00:19:54.172 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.172 11:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.739 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:54.739 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.739 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.739 [2024-11-15 11:31:37.408792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:54.739 [2024-11-15 11:31:37.408830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:54.739 [2024-11-15 11:31:37.408947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.739 [2024-11-15 11:31:37.409040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:19:54.739 [2024-11-15 11:31:37.409055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:54.739 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.739 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.739 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:54.739 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.739 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.739 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.739 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:54.739 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:54.739 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:54.740 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:54.740 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:54.740 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:54.740 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:54.740 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:54.740 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:54.740 11:31:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:54.740 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:54.740 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:54.740 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:54.999 /dev/nbd0 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:54.999 1+0 records in 00:19:54.999 1+0 records out 00:19:54.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033866 
s, 12.1 MB/s 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:54.999 11:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:55.258 /dev/nbd1 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@875 -- # break 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:55.258 1+0 records in 00:19:55.258 1+0 records out 00:19:55.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371907 s, 11.0 MB/s 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:55.258 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:55.517 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:55.517 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:55.517 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:55.517 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:55.517 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:55.517 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:55.517 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:55.776 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:55.776 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:55.776 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:55.776 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.776 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.776 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:55.776 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:55.776 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.776 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:55.776 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:56.035 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:56.035 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:56.035 
11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:56.035 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:56.035 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:56.035 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:56.035 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:56.035 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:56.035 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:56.035 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:56.035 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.035 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.294 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.294 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:56.294 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.294 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.294 [2024-11-15 11:31:38.989932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:56.294 [2024-11-15 11:31:38.990014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.294 [2024-11-15 11:31:38.990051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:19:56.294 [2024-11-15 11:31:38.990066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.294 [2024-11-15 11:31:38.993097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.294 [2024-11-15 11:31:38.993140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:56.294 [2024-11-15 11:31:38.993290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:56.294 [2024-11-15 11:31:38.993361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:56.294 [2024-11-15 11:31:38.993561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:56.294 spare 00:19:56.294 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.294 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:56.294 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.294 11:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.294 [2024-11-15 11:31:39.093715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:56.294 [2024-11-15 11:31:39.093759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:56.294 [2024-11-15 11:31:39.093900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:56.294 [2024-11-15 11:31:39.094093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:56.294 [2024-11-15 11:31:39.094110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:56.294 [2024-11-15 11:31:39.094380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.294 11:31:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.294 "name": "raid_bdev1", 00:19:56.294 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:56.294 "strip_size_kb": 0, 00:19:56.294 "state": "online", 00:19:56.294 "raid_level": "raid1", 00:19:56.294 "superblock": true, 00:19:56.294 "num_base_bdevs": 2, 00:19:56.294 "num_base_bdevs_discovered": 2, 00:19:56.294 "num_base_bdevs_operational": 2, 00:19:56.294 "base_bdevs_list": [ 00:19:56.294 { 00:19:56.294 "name": "spare", 00:19:56.294 "uuid": "ae4a6b96-375a-511b-adc0-c695c6c005e5", 00:19:56.294 "is_configured": true, 00:19:56.294 "data_offset": 256, 00:19:56.294 "data_size": 7936 00:19:56.294 }, 00:19:56.294 { 00:19:56.294 "name": "BaseBdev2", 00:19:56.294 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:56.294 "is_configured": true, 00:19:56.294 "data_offset": 256, 00:19:56.294 "data_size": 7936 00:19:56.294 } 00:19:56.294 ] 00:19:56.294 }' 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.294 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.914 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.914 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.914 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:56.914 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:56.914 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.914 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.914 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.914 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.914 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.914 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.915 "name": "raid_bdev1", 00:19:56.915 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:56.915 "strip_size_kb": 0, 00:19:56.915 "state": "online", 00:19:56.915 "raid_level": "raid1", 00:19:56.915 "superblock": true, 00:19:56.915 "num_base_bdevs": 2, 00:19:56.915 "num_base_bdevs_discovered": 2, 00:19:56.915 "num_base_bdevs_operational": 2, 00:19:56.915 "base_bdevs_list": [ 00:19:56.915 { 00:19:56.915 "name": "spare", 00:19:56.915 "uuid": "ae4a6b96-375a-511b-adc0-c695c6c005e5", 00:19:56.915 "is_configured": true, 00:19:56.915 "data_offset": 256, 00:19:56.915 "data_size": 7936 00:19:56.915 }, 00:19:56.915 { 00:19:56.915 "name": "BaseBdev2", 00:19:56.915 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:56.915 "is_configured": true, 00:19:56.915 "data_offset": 256, 00:19:56.915 "data_size": 7936 00:19:56.915 } 00:19:56.915 ] 00:19:56.915 }' 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.915 [2024-11-15 11:31:39.846642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.915 11:31:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.915 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.174 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.174 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.174 "name": "raid_bdev1", 00:19:57.174 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:57.174 "strip_size_kb": 0, 00:19:57.174 "state": "online", 00:19:57.174 "raid_level": "raid1", 00:19:57.174 "superblock": true, 00:19:57.174 "num_base_bdevs": 2, 00:19:57.174 "num_base_bdevs_discovered": 1, 00:19:57.174 "num_base_bdevs_operational": 1, 00:19:57.174 "base_bdevs_list": [ 00:19:57.174 { 00:19:57.174 "name": null, 00:19:57.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.174 "is_configured": false, 00:19:57.174 "data_offset": 0, 00:19:57.174 "data_size": 7936 00:19:57.174 }, 00:19:57.174 { 00:19:57.174 "name": "BaseBdev2", 00:19:57.174 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:57.174 "is_configured": true, 00:19:57.174 "data_offset": 256, 00:19:57.174 "data_size": 7936 00:19:57.174 } 
00:19:57.174 ] 00:19:57.174 }' 00:19:57.174 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.174 11:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.742 11:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:57.742 11:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.742 11:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.742 [2024-11-15 11:31:40.398927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.742 [2024-11-15 11:31:40.399437] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:57.742 [2024-11-15 11:31:40.399474] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:57.742 [2024-11-15 11:31:40.399529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.742 [2024-11-15 11:31:40.413236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:57.742 11:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.742 11:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:57.742 [2024-11-15 11:31:40.416163] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.678 "name": "raid_bdev1", 00:19:58.678 
"uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:58.678 "strip_size_kb": 0, 00:19:58.678 "state": "online", 00:19:58.678 "raid_level": "raid1", 00:19:58.678 "superblock": true, 00:19:58.678 "num_base_bdevs": 2, 00:19:58.678 "num_base_bdevs_discovered": 2, 00:19:58.678 "num_base_bdevs_operational": 2, 00:19:58.678 "process": { 00:19:58.678 "type": "rebuild", 00:19:58.678 "target": "spare", 00:19:58.678 "progress": { 00:19:58.678 "blocks": 2560, 00:19:58.678 "percent": 32 00:19:58.678 } 00:19:58.678 }, 00:19:58.678 "base_bdevs_list": [ 00:19:58.678 { 00:19:58.678 "name": "spare", 00:19:58.678 "uuid": "ae4a6b96-375a-511b-adc0-c695c6c005e5", 00:19:58.678 "is_configured": true, 00:19:58.678 "data_offset": 256, 00:19:58.678 "data_size": 7936 00:19:58.678 }, 00:19:58.678 { 00:19:58.678 "name": "BaseBdev2", 00:19:58.678 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:58.678 "is_configured": true, 00:19:58.678 "data_offset": 256, 00:19:58.678 "data_size": 7936 00:19:58.678 } 00:19:58.678 ] 00:19:58.678 }' 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.678 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.678 [2024-11-15 11:31:41.593777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.938 
[2024-11-15 11:31:41.627352] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:58.938 [2024-11-15 11:31:41.627592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.938 [2024-11-15 11:31:41.627622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.938 [2024-11-15 11:31:41.627651] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.938 11:31:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.938 "name": "raid_bdev1", 00:19:58.938 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:19:58.938 "strip_size_kb": 0, 00:19:58.938 "state": "online", 00:19:58.938 "raid_level": "raid1", 00:19:58.938 "superblock": true, 00:19:58.938 "num_base_bdevs": 2, 00:19:58.938 "num_base_bdevs_discovered": 1, 00:19:58.938 "num_base_bdevs_operational": 1, 00:19:58.938 "base_bdevs_list": [ 00:19:58.938 { 00:19:58.938 "name": null, 00:19:58.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.938 "is_configured": false, 00:19:58.938 "data_offset": 0, 00:19:58.938 "data_size": 7936 00:19:58.938 }, 00:19:58.938 { 00:19:58.938 "name": "BaseBdev2", 00:19:58.938 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:19:58.938 "is_configured": true, 00:19:58.938 "data_offset": 256, 00:19:58.938 "data_size": 7936 00:19:58.938 } 00:19:58.938 ] 00:19:58.938 }' 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.938 11:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.505 11:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:59.505 11:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.505 11:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.505 [2024-11-15 11:31:42.203814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:59.505 [2024-11-15 11:31:42.203930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.505 [2024-11-15 11:31:42.203968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:59.505 [2024-11-15 11:31:42.204004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.505 [2024-11-15 11:31:42.204432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.505 [2024-11-15 11:31:42.204464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:59.506 [2024-11-15 11:31:42.204551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:59.506 [2024-11-15 11:31:42.204590] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:59.506 [2024-11-15 11:31:42.204620] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:59.506 [2024-11-15 11:31:42.204653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:59.506 [2024-11-15 11:31:42.217707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:59.506 spare 00:19:59.506 11:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.506 11:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:59.506 [2024-11-15 11:31:42.220485] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:00.442 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.442 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.442 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.442 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.442 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.442 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.442 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.442 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.442 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.442 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.442 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.442 "name": 
"raid_bdev1", 00:20:00.442 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:20:00.442 "strip_size_kb": 0, 00:20:00.442 "state": "online", 00:20:00.442 "raid_level": "raid1", 00:20:00.442 "superblock": true, 00:20:00.442 "num_base_bdevs": 2, 00:20:00.442 "num_base_bdevs_discovered": 2, 00:20:00.442 "num_base_bdevs_operational": 2, 00:20:00.442 "process": { 00:20:00.442 "type": "rebuild", 00:20:00.442 "target": "spare", 00:20:00.442 "progress": { 00:20:00.442 "blocks": 2560, 00:20:00.442 "percent": 32 00:20:00.442 } 00:20:00.442 }, 00:20:00.442 "base_bdevs_list": [ 00:20:00.442 { 00:20:00.442 "name": "spare", 00:20:00.442 "uuid": "ae4a6b96-375a-511b-adc0-c695c6c005e5", 00:20:00.442 "is_configured": true, 00:20:00.442 "data_offset": 256, 00:20:00.442 "data_size": 7936 00:20:00.443 }, 00:20:00.443 { 00:20:00.443 "name": "BaseBdev2", 00:20:00.443 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:20:00.443 "is_configured": true, 00:20:00.443 "data_offset": 256, 00:20:00.443 "data_size": 7936 00:20:00.443 } 00:20:00.443 ] 00:20:00.443 }' 00:20:00.443 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.443 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.443 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.443 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.443 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:00.443 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.443 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.702 [2024-11-15 11:31:43.393896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:00.702 [2024-11-15 11:31:43.431536] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:00.702 [2024-11-15 11:31:43.431871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.702 [2024-11-15 11:31:43.432209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:00.702 [2024-11-15 11:31:43.432273] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.702 "name": "raid_bdev1", 00:20:00.702 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:20:00.702 "strip_size_kb": 0, 00:20:00.702 "state": "online", 00:20:00.702 "raid_level": "raid1", 00:20:00.702 "superblock": true, 00:20:00.702 "num_base_bdevs": 2, 00:20:00.702 "num_base_bdevs_discovered": 1, 00:20:00.702 "num_base_bdevs_operational": 1, 00:20:00.702 "base_bdevs_list": [ 00:20:00.702 { 00:20:00.702 "name": null, 00:20:00.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.702 "is_configured": false, 00:20:00.702 "data_offset": 0, 00:20:00.702 "data_size": 7936 00:20:00.702 }, 00:20:00.702 { 00:20:00.702 "name": "BaseBdev2", 00:20:00.702 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:20:00.702 "is_configured": true, 00:20:00.702 "data_offset": 256, 00:20:00.702 "data_size": 7936 00:20:00.702 } 00:20:00.702 ] 00:20:00.702 }' 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.702 11:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.269 11:31:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.269 "name": "raid_bdev1", 00:20:01.269 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:20:01.269 "strip_size_kb": 0, 00:20:01.269 "state": "online", 00:20:01.269 "raid_level": "raid1", 00:20:01.269 "superblock": true, 00:20:01.269 "num_base_bdevs": 2, 00:20:01.269 "num_base_bdevs_discovered": 1, 00:20:01.269 "num_base_bdevs_operational": 1, 00:20:01.269 "base_bdevs_list": [ 00:20:01.269 { 00:20:01.269 "name": null, 00:20:01.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.269 "is_configured": false, 00:20:01.269 "data_offset": 0, 00:20:01.269 "data_size": 7936 00:20:01.269 }, 00:20:01.269 { 00:20:01.269 "name": "BaseBdev2", 00:20:01.269 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:20:01.269 "is_configured": true, 00:20:01.269 "data_offset": 256, 00:20:01.269 "data_size": 7936 00:20:01.269 } 00:20:01.269 ] 00:20:01.269 }' 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.269 [2024-11-15 11:31:44.181094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:01.269 [2024-11-15 11:31:44.181217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.269 [2024-11-15 11:31:44.181258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:01.269 [2024-11-15 11:31:44.181275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.269 [2024-11-15 11:31:44.181960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.269 [2024-11-15 11:31:44.181984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:20:01.269 [2024-11-15 11:31:44.182082] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:01.269 [2024-11-15 11:31:44.182114] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:01.269 [2024-11-15 11:31:44.182133] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:01.269 [2024-11-15 11:31:44.182156] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:01.269 BaseBdev1 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.269 11:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.648 "name": "raid_bdev1", 00:20:02.648 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:20:02.648 "strip_size_kb": 0, 00:20:02.648 "state": "online", 00:20:02.648 "raid_level": "raid1", 00:20:02.648 "superblock": true, 00:20:02.648 "num_base_bdevs": 2, 00:20:02.648 "num_base_bdevs_discovered": 1, 00:20:02.648 "num_base_bdevs_operational": 1, 00:20:02.648 "base_bdevs_list": [ 00:20:02.648 { 00:20:02.648 "name": null, 00:20:02.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.648 "is_configured": false, 00:20:02.648 "data_offset": 0, 00:20:02.648 "data_size": 7936 00:20:02.648 }, 00:20:02.648 { 00:20:02.648 "name": "BaseBdev2", 00:20:02.648 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:20:02.648 "is_configured": true, 00:20:02.648 "data_offset": 256, 00:20:02.648 "data_size": 7936 00:20:02.648 } 00:20:02.648 ] 00:20:02.648 }' 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.648 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.907 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.907 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.907 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:02.907 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.908 "name": "raid_bdev1", 00:20:02.908 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:20:02.908 "strip_size_kb": 0, 00:20:02.908 "state": "online", 00:20:02.908 "raid_level": "raid1", 00:20:02.908 "superblock": true, 00:20:02.908 "num_base_bdevs": 2, 00:20:02.908 "num_base_bdevs_discovered": 1, 00:20:02.908 "num_base_bdevs_operational": 1, 00:20:02.908 "base_bdevs_list": [ 00:20:02.908 { 00:20:02.908 "name": null, 00:20:02.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.908 "is_configured": false, 00:20:02.908 "data_offset": 0, 00:20:02.908 "data_size": 7936 00:20:02.908 }, 00:20:02.908 { 00:20:02.908 "name": "BaseBdev2", 00:20:02.908 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:20:02.908 "is_configured": 
true, 00:20:02.908 "data_offset": 256, 00:20:02.908 "data_size": 7936 00:20:02.908 } 00:20:02.908 ] 00:20:02.908 }' 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:02.908 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:03.173 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:03.173 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:03.173 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:03.173 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:03.173 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.173 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.173 [2024-11-15 11:31:45.861693] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.173 [2024-11-15 11:31:45.861939] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:03.173 [2024-11-15 11:31:45.861962] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:03.173 request: 00:20:03.173 { 00:20:03.173 "base_bdev": "BaseBdev1", 00:20:03.173 "raid_bdev": "raid_bdev1", 00:20:03.173 "method": "bdev_raid_add_base_bdev", 00:20:03.173 "req_id": 1 00:20:03.173 } 00:20:03.173 Got JSON-RPC error response 00:20:03.173 response: 00:20:03.173 { 00:20:03.173 "code": -22, 00:20:03.173 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:03.173 } 00:20:03.173 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:03.173 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:20:03.173 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:03.173 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:03.173 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:03.173 11:31:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:04.121 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:04.121 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.121 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.121 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:20:04.121 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.121 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:04.121 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.122 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.122 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.122 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.122 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.122 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.122 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.122 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.122 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.122 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.122 "name": "raid_bdev1", 00:20:04.122 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:20:04.122 "strip_size_kb": 0, 00:20:04.122 "state": "online", 00:20:04.122 "raid_level": "raid1", 00:20:04.122 "superblock": true, 00:20:04.122 "num_base_bdevs": 2, 00:20:04.122 "num_base_bdevs_discovered": 1, 00:20:04.122 "num_base_bdevs_operational": 1, 00:20:04.122 "base_bdevs_list": [ 00:20:04.122 { 00:20:04.122 "name": null, 00:20:04.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.122 "is_configured": false, 00:20:04.122 
"data_offset": 0, 00:20:04.122 "data_size": 7936 00:20:04.122 }, 00:20:04.122 { 00:20:04.122 "name": "BaseBdev2", 00:20:04.122 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:20:04.122 "is_configured": true, 00:20:04.122 "data_offset": 256, 00:20:04.122 "data_size": 7936 00:20:04.122 } 00:20:04.122 ] 00:20:04.122 }' 00:20:04.122 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.122 11:31:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.689 "name": "raid_bdev1", 00:20:04.689 "uuid": "8b1adc74-9130-41c3-96f4-f7be84098712", 00:20:04.689 
"strip_size_kb": 0, 00:20:04.689 "state": "online", 00:20:04.689 "raid_level": "raid1", 00:20:04.689 "superblock": true, 00:20:04.689 "num_base_bdevs": 2, 00:20:04.689 "num_base_bdevs_discovered": 1, 00:20:04.689 "num_base_bdevs_operational": 1, 00:20:04.689 "base_bdevs_list": [ 00:20:04.689 { 00:20:04.689 "name": null, 00:20:04.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.689 "is_configured": false, 00:20:04.689 "data_offset": 0, 00:20:04.689 "data_size": 7936 00:20:04.689 }, 00:20:04.689 { 00:20:04.689 "name": "BaseBdev2", 00:20:04.689 "uuid": "18a2319d-0a76-56c0-a7ea-2867e4894d93", 00:20:04.689 "is_configured": true, 00:20:04.689 "data_offset": 256, 00:20:04.689 "data_size": 7936 00:20:04.689 } 00:20:04.689 ] 00:20:04.689 }' 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88094 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88094 ']' 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 88094 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88094 00:20:04.689 killing process with 
pid 88094 00:20:04.689 Received shutdown signal, test time was about 60.000000 seconds 00:20:04.689 00:20:04.689 Latency(us) 00:20:04.689 [2024-11-15T11:31:47.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.689 [2024-11-15T11:31:47.639Z] =================================================================================================================== 00:20:04.689 [2024-11-15T11:31:47.639Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88094' 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 88094 00:20:04.689 [2024-11-15 11:31:47.619776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:04.689 11:31:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 88094 00:20:04.689 [2024-11-15 11:31:47.619958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:04.689 [2024-11-15 11:31:47.620037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:04.689 [2024-11-15 11:31:47.620107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:05.255 [2024-11-15 11:31:47.905156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:06.191 11:31:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:20:06.191 ************************************ 00:20:06.191 END TEST raid_rebuild_test_sb_md_separate 00:20:06.191 
************************************ 00:20:06.191 00:20:06.191 real 0m21.899s 00:20:06.191 user 0m29.688s 00:20:06.191 sys 0m2.587s 00:20:06.191 11:31:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:06.191 11:31:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:06.191 11:31:49 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:06.191 11:31:49 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:06.191 11:31:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:06.191 11:31:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:06.191 11:31:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:06.191 ************************************ 00:20:06.191 START TEST raid_state_function_test_sb_md_interleaved 00:20:06.191 ************************************ 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:06.191 11:31:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:06.191 Process raid pid: 88801 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:06.191 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:06.192 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88801 00:20:06.192 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88801' 00:20:06.192 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:06.192 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88801 00:20:06.192 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88801 ']' 00:20:06.192 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.192 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:06.192 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.192 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:06.192 11:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.192 [2024-11-15 11:31:49.122253] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:20:06.192 [2024-11-15 11:31:49.122579] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.450 [2024-11-15 11:31:49.300290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.709 [2024-11-15 11:31:49.445664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.969 [2024-11-15 11:31:49.660472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.969 [2024-11-15 11:31:49.660897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.228 [2024-11-15 11:31:50.139484] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:07.228 [2024-11-15 11:31:50.139573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:07.228 [2024-11-15 11:31:50.139606] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:07.228 [2024-11-15 11:31:50.139623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:07.228 11:31:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.228 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.228 11:31:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.487 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.487 "name": "Existed_Raid", 00:20:07.487 "uuid": "bbd77c9e-dd90-46a3-afed-5bf4e6fd2ed1", 00:20:07.487 "strip_size_kb": 0, 00:20:07.487 "state": "configuring", 00:20:07.487 "raid_level": "raid1", 00:20:07.487 "superblock": true, 00:20:07.487 "num_base_bdevs": 2, 00:20:07.487 "num_base_bdevs_discovered": 0, 00:20:07.487 "num_base_bdevs_operational": 2, 00:20:07.487 "base_bdevs_list": [ 00:20:07.487 { 00:20:07.487 "name": "BaseBdev1", 00:20:07.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.487 "is_configured": false, 00:20:07.487 "data_offset": 0, 00:20:07.487 "data_size": 0 00:20:07.487 }, 00:20:07.487 { 00:20:07.487 "name": "BaseBdev2", 00:20:07.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.487 "is_configured": false, 00:20:07.487 "data_offset": 0, 00:20:07.487 "data_size": 0 00:20:07.487 } 00:20:07.487 ] 00:20:07.487 }' 00:20:07.487 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.487 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.747 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:07.747 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.747 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.747 [2024-11-15 11:31:50.627625] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:07.747 [2024-11-15 11:31:50.627668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:07.747 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.747 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:07.747 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.747 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.747 [2024-11-15 11:31:50.635554] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:07.747 [2024-11-15 11:31:50.635666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:07.747 [2024-11-15 11:31:50.635682] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:07.747 [2024-11-15 11:31:50.635701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:07.747 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.747 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.748 [2024-11-15 11:31:50.681201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.748 BaseBdev1 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.748 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.007 [ 00:20:08.007 { 00:20:08.007 "name": "BaseBdev1", 00:20:08.007 "aliases": [ 00:20:08.007 "4b6bd692-ea72-4ec6-8741-b4e24daeb188" 00:20:08.007 ], 00:20:08.007 "product_name": "Malloc disk", 00:20:08.007 "block_size": 4128, 00:20:08.007 "num_blocks": 8192, 00:20:08.007 "uuid": "4b6bd692-ea72-4ec6-8741-b4e24daeb188", 00:20:08.007 "md_size": 32, 00:20:08.007 
"md_interleave": true, 00:20:08.007 "dif_type": 0, 00:20:08.007 "assigned_rate_limits": { 00:20:08.007 "rw_ios_per_sec": 0, 00:20:08.007 "rw_mbytes_per_sec": 0, 00:20:08.007 "r_mbytes_per_sec": 0, 00:20:08.007 "w_mbytes_per_sec": 0 00:20:08.007 }, 00:20:08.007 "claimed": true, 00:20:08.007 "claim_type": "exclusive_write", 00:20:08.007 "zoned": false, 00:20:08.007 "supported_io_types": { 00:20:08.007 "read": true, 00:20:08.007 "write": true, 00:20:08.007 "unmap": true, 00:20:08.007 "flush": true, 00:20:08.007 "reset": true, 00:20:08.007 "nvme_admin": false, 00:20:08.007 "nvme_io": false, 00:20:08.007 "nvme_io_md": false, 00:20:08.007 "write_zeroes": true, 00:20:08.007 "zcopy": true, 00:20:08.007 "get_zone_info": false, 00:20:08.007 "zone_management": false, 00:20:08.007 "zone_append": false, 00:20:08.007 "compare": false, 00:20:08.007 "compare_and_write": false, 00:20:08.007 "abort": true, 00:20:08.007 "seek_hole": false, 00:20:08.007 "seek_data": false, 00:20:08.007 "copy": true, 00:20:08.007 "nvme_iov_md": false 00:20:08.007 }, 00:20:08.007 "memory_domains": [ 00:20:08.007 { 00:20:08.007 "dma_device_id": "system", 00:20:08.007 "dma_device_type": 1 00:20:08.007 }, 00:20:08.007 { 00:20:08.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.007 "dma_device_type": 2 00:20:08.007 } 00:20:08.007 ], 00:20:08.007 "driver_specific": {} 00:20:08.007 } 00:20:08.007 ] 00:20:08.007 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.007 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:20:08.007 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:08.007 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.008 11:31:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.008 "name": "Existed_Raid", 00:20:08.008 "uuid": "84818ba8-38b1-4c60-bed9-0f8de4f6fbd4", 00:20:08.008 "strip_size_kb": 0, 00:20:08.008 "state": "configuring", 00:20:08.008 "raid_level": "raid1", 
00:20:08.008 "superblock": true, 00:20:08.008 "num_base_bdevs": 2, 00:20:08.008 "num_base_bdevs_discovered": 1, 00:20:08.008 "num_base_bdevs_operational": 2, 00:20:08.008 "base_bdevs_list": [ 00:20:08.008 { 00:20:08.008 "name": "BaseBdev1", 00:20:08.008 "uuid": "4b6bd692-ea72-4ec6-8741-b4e24daeb188", 00:20:08.008 "is_configured": true, 00:20:08.008 "data_offset": 256, 00:20:08.008 "data_size": 7936 00:20:08.008 }, 00:20:08.008 { 00:20:08.008 "name": "BaseBdev2", 00:20:08.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.008 "is_configured": false, 00:20:08.008 "data_offset": 0, 00:20:08.008 "data_size": 0 00:20:08.008 } 00:20:08.008 ] 00:20:08.008 }' 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.008 11:31:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.576 [2024-11-15 11:31:51.225519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:08.576 [2024-11-15 11:31:51.225616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.576 [2024-11-15 11:31:51.233590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:08.576 [2024-11-15 11:31:51.236319] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:08.576 [2024-11-15 11:31:51.236373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:08.576 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.577 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.577 
11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.577 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.577 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.577 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.577 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.577 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.577 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.577 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.577 "name": "Existed_Raid", 00:20:08.577 "uuid": "b16bd249-8326-4389-adfb-e47581e05e47", 00:20:08.577 "strip_size_kb": 0, 00:20:08.577 "state": "configuring", 00:20:08.577 "raid_level": "raid1", 00:20:08.577 "superblock": true, 00:20:08.577 "num_base_bdevs": 2, 00:20:08.577 "num_base_bdevs_discovered": 1, 00:20:08.577 "num_base_bdevs_operational": 2, 00:20:08.577 "base_bdevs_list": [ 00:20:08.577 { 00:20:08.577 "name": "BaseBdev1", 00:20:08.577 "uuid": "4b6bd692-ea72-4ec6-8741-b4e24daeb188", 00:20:08.577 "is_configured": true, 00:20:08.577 "data_offset": 256, 00:20:08.577 "data_size": 7936 00:20:08.577 }, 00:20:08.577 { 00:20:08.577 "name": "BaseBdev2", 00:20:08.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.577 "is_configured": false, 00:20:08.577 "data_offset": 0, 00:20:08.577 "data_size": 0 00:20:08.577 } 00:20:08.577 ] 00:20:08.577 }' 00:20:08.577 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:08.577 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.836 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:08.836 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.836 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.095 [2024-11-15 11:31:51.787016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:09.095 [2024-11-15 11:31:51.787563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:09.095 BaseBdev2 00:20:09.095 [2024-11-15 11:31:51.787710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:09.095 [2024-11-15 11:31:51.787838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:09.095 [2024-11-15 11:31:51.787951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:09.096 [2024-11-15 11:31:51.787972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:09.096 [2024-11-15 11:31:51.788069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.096 [ 00:20:09.096 { 00:20:09.096 "name": "BaseBdev2", 00:20:09.096 "aliases": [ 00:20:09.096 "530e993f-041a-4661-b924-8d87c4699a4d" 00:20:09.096 ], 00:20:09.096 "product_name": "Malloc disk", 00:20:09.096 "block_size": 4128, 00:20:09.096 "num_blocks": 8192, 00:20:09.096 "uuid": "530e993f-041a-4661-b924-8d87c4699a4d", 00:20:09.096 "md_size": 32, 00:20:09.096 "md_interleave": true, 00:20:09.096 "dif_type": 0, 00:20:09.096 "assigned_rate_limits": { 00:20:09.096 "rw_ios_per_sec": 0, 00:20:09.096 "rw_mbytes_per_sec": 0, 00:20:09.096 "r_mbytes_per_sec": 0, 00:20:09.096 "w_mbytes_per_sec": 0 00:20:09.096 }, 00:20:09.096 "claimed": true, 00:20:09.096 "claim_type": "exclusive_write", 
00:20:09.096 "zoned": false, 00:20:09.096 "supported_io_types": { 00:20:09.096 "read": true, 00:20:09.096 "write": true, 00:20:09.096 "unmap": true, 00:20:09.096 "flush": true, 00:20:09.096 "reset": true, 00:20:09.096 "nvme_admin": false, 00:20:09.096 "nvme_io": false, 00:20:09.096 "nvme_io_md": false, 00:20:09.096 "write_zeroes": true, 00:20:09.096 "zcopy": true, 00:20:09.096 "get_zone_info": false, 00:20:09.096 "zone_management": false, 00:20:09.096 "zone_append": false, 00:20:09.096 "compare": false, 00:20:09.096 "compare_and_write": false, 00:20:09.096 "abort": true, 00:20:09.096 "seek_hole": false, 00:20:09.096 "seek_data": false, 00:20:09.096 "copy": true, 00:20:09.096 "nvme_iov_md": false 00:20:09.096 }, 00:20:09.096 "memory_domains": [ 00:20:09.096 { 00:20:09.096 "dma_device_id": "system", 00:20:09.096 "dma_device_type": 1 00:20:09.096 }, 00:20:09.096 { 00:20:09.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.096 "dma_device_type": 2 00:20:09.096 } 00:20:09.096 ], 00:20:09.096 "driver_specific": {} 00:20:09.096 } 00:20:09.096 ] 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.096 
11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.096 "name": "Existed_Raid", 00:20:09.096 "uuid": "b16bd249-8326-4389-adfb-e47581e05e47", 00:20:09.096 "strip_size_kb": 0, 00:20:09.096 "state": "online", 00:20:09.096 "raid_level": "raid1", 00:20:09.096 "superblock": true, 00:20:09.096 "num_base_bdevs": 2, 00:20:09.096 "num_base_bdevs_discovered": 2, 00:20:09.096 
"num_base_bdevs_operational": 2, 00:20:09.096 "base_bdevs_list": [ 00:20:09.096 { 00:20:09.096 "name": "BaseBdev1", 00:20:09.096 "uuid": "4b6bd692-ea72-4ec6-8741-b4e24daeb188", 00:20:09.096 "is_configured": true, 00:20:09.096 "data_offset": 256, 00:20:09.096 "data_size": 7936 00:20:09.096 }, 00:20:09.096 { 00:20:09.096 "name": "BaseBdev2", 00:20:09.096 "uuid": "530e993f-041a-4661-b924-8d87c4699a4d", 00:20:09.096 "is_configured": true, 00:20:09.096 "data_offset": 256, 00:20:09.096 "data_size": 7936 00:20:09.096 } 00:20:09.096 ] 00:20:09.096 }' 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.096 11:31:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.772 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:09.772 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:09.772 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:09.772 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:09.772 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:09.772 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:09.772 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:09.772 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.772 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.772 11:31:52 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:09.772 [2024-11-15 11:31:52.315646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.772 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.772 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:09.772 "name": "Existed_Raid", 00:20:09.772 "aliases": [ 00:20:09.772 "b16bd249-8326-4389-adfb-e47581e05e47" 00:20:09.772 ], 00:20:09.772 "product_name": "Raid Volume", 00:20:09.772 "block_size": 4128, 00:20:09.772 "num_blocks": 7936, 00:20:09.772 "uuid": "b16bd249-8326-4389-adfb-e47581e05e47", 00:20:09.772 "md_size": 32, 00:20:09.772 "md_interleave": true, 00:20:09.772 "dif_type": 0, 00:20:09.772 "assigned_rate_limits": { 00:20:09.772 "rw_ios_per_sec": 0, 00:20:09.772 "rw_mbytes_per_sec": 0, 00:20:09.772 "r_mbytes_per_sec": 0, 00:20:09.772 "w_mbytes_per_sec": 0 00:20:09.772 }, 00:20:09.772 "claimed": false, 00:20:09.772 "zoned": false, 00:20:09.772 "supported_io_types": { 00:20:09.772 "read": true, 00:20:09.772 "write": true, 00:20:09.772 "unmap": false, 00:20:09.772 "flush": false, 00:20:09.772 "reset": true, 00:20:09.773 "nvme_admin": false, 00:20:09.773 "nvme_io": false, 00:20:09.773 "nvme_io_md": false, 00:20:09.773 "write_zeroes": true, 00:20:09.773 "zcopy": false, 00:20:09.773 "get_zone_info": false, 00:20:09.773 "zone_management": false, 00:20:09.773 "zone_append": false, 00:20:09.773 "compare": false, 00:20:09.773 "compare_and_write": false, 00:20:09.773 "abort": false, 00:20:09.773 "seek_hole": false, 00:20:09.773 "seek_data": false, 00:20:09.773 "copy": false, 00:20:09.773 "nvme_iov_md": false 00:20:09.773 }, 00:20:09.773 "memory_domains": [ 00:20:09.773 { 00:20:09.773 "dma_device_id": "system", 00:20:09.773 "dma_device_type": 1 00:20:09.773 }, 00:20:09.773 { 00:20:09.773 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:09.773 "dma_device_type": 2 00:20:09.773 }, 00:20:09.773 { 00:20:09.773 "dma_device_id": "system", 00:20:09.773 "dma_device_type": 1 00:20:09.773 }, 00:20:09.773 { 00:20:09.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.773 "dma_device_type": 2 00:20:09.773 } 00:20:09.773 ], 00:20:09.773 "driver_specific": { 00:20:09.773 "raid": { 00:20:09.773 "uuid": "b16bd249-8326-4389-adfb-e47581e05e47", 00:20:09.773 "strip_size_kb": 0, 00:20:09.773 "state": "online", 00:20:09.773 "raid_level": "raid1", 00:20:09.773 "superblock": true, 00:20:09.773 "num_base_bdevs": 2, 00:20:09.773 "num_base_bdevs_discovered": 2, 00:20:09.773 "num_base_bdevs_operational": 2, 00:20:09.773 "base_bdevs_list": [ 00:20:09.773 { 00:20:09.773 "name": "BaseBdev1", 00:20:09.773 "uuid": "4b6bd692-ea72-4ec6-8741-b4e24daeb188", 00:20:09.773 "is_configured": true, 00:20:09.773 "data_offset": 256, 00:20:09.773 "data_size": 7936 00:20:09.773 }, 00:20:09.773 { 00:20:09.773 "name": "BaseBdev2", 00:20:09.773 "uuid": "530e993f-041a-4661-b924-8d87c4699a4d", 00:20:09.773 "is_configured": true, 00:20:09.773 "data_offset": 256, 00:20:09.773 "data_size": 7936 00:20:09.773 } 00:20:09.773 ] 00:20:09.773 } 00:20:09.773 } 00:20:09.773 }' 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:09.773 BaseBdev2' 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:09.773 
11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.773 [2024-11-15 11:31:52.571406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.773 11:31:52 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.773 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.041 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.041 "name": "Existed_Raid", 00:20:10.041 "uuid": "b16bd249-8326-4389-adfb-e47581e05e47", 00:20:10.041 "strip_size_kb": 0, 00:20:10.041 "state": "online", 00:20:10.041 "raid_level": "raid1", 00:20:10.041 "superblock": true, 00:20:10.041 "num_base_bdevs": 2, 00:20:10.041 "num_base_bdevs_discovered": 1, 00:20:10.041 "num_base_bdevs_operational": 1, 00:20:10.041 "base_bdevs_list": [ 00:20:10.041 { 00:20:10.041 "name": null, 00:20:10.041 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:10.041 "is_configured": false, 00:20:10.041 "data_offset": 0, 00:20:10.041 "data_size": 7936 00:20:10.041 }, 00:20:10.041 { 00:20:10.041 "name": "BaseBdev2", 00:20:10.041 "uuid": "530e993f-041a-4661-b924-8d87c4699a4d", 00:20:10.041 "is_configured": true, 00:20:10.041 "data_offset": 256, 00:20:10.041 "data_size": 7936 00:20:10.041 } 00:20:10.041 ] 00:20:10.041 }' 00:20:10.041 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.041 11:31:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.327 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:10.327 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:10.327 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.327 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:10.327 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.327 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.327 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.327 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:10.327 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:10.327 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:10.327 11:31:53 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.327 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.327 [2024-11-15 11:31:53.213846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:10.327 [2024-11-15 11:31:53.213984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:10.587 [2024-11-15 11:31:53.294791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:10.587 [2024-11-15 11:31:53.295117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:10.587 [2024-11-15 11:31:53.295315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88801 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88801 ']' 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88801 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88801 00:20:10.587 killing process with pid 88801 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88801' 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 88801 00:20:10.587 [2024-11-15 11:31:53.376702] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:10.587 11:31:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 88801 00:20:10.587 [2024-11-15 11:31:53.391723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:11.524 
11:31:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:11.524 00:20:11.524 real 0m5.452s 00:20:11.524 user 0m8.070s 00:20:11.524 sys 0m0.880s 00:20:11.524 11:31:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:11.524 ************************************ 00:20:11.524 END TEST raid_state_function_test_sb_md_interleaved 00:20:11.524 ************************************ 00:20:11.524 11:31:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.784 11:31:54 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:11.784 11:31:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:20:11.784 11:31:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:11.784 11:31:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:11.784 ************************************ 00:20:11.784 START TEST raid_superblock_test_md_interleaved 00:20:11.784 ************************************ 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89050 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89050 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89050 ']' 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:11.784 11:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.784 [2024-11-15 11:31:54.671036] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:20:11.784 [2024-11-15 11:31:54.671514] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89050 ] 00:20:12.043 [2024-11-15 11:31:54.854565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.302 [2024-11-15 11:31:55.013005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.562 [2024-11-15 11:31:55.252135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:12.562 [2024-11-15 11:31:55.252189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # 
local bdev_malloc=malloc1 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.822 malloc1 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.822 [2024-11-15 11:31:55.720192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:12.822 [2024-11-15 11:31:55.720527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.822 [2024-11-15 11:31:55.720720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:20:12.822 [2024-11-15 11:31:55.720869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.822 [2024-11-15 11:31:55.723890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.822 [2024-11-15 11:31:55.724059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:12.822 pt1 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.822 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.081 malloc2 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.081 [2024-11-15 11:31:55.780607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:13.081 [2024-11-15 11:31:55.780701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.081 [2024-11-15 11:31:55.780736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:13.081 [2024-11-15 11:31:55.780751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.081 [2024-11-15 11:31:55.783514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.081 [2024-11-15 11:31:55.783555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:13.081 pt2 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.081 [2024-11-15 
11:31:55.792651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:13.081 [2024-11-15 11:31:55.795319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:13.081 [2024-11-15 11:31:55.795604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:13.081 [2024-11-15 11:31:55.795651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:13.081 [2024-11-15 11:31:55.795769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:13.081 [2024-11-15 11:31:55.795903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:13.081 [2024-11-15 11:31:55.795922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:13.081 [2024-11-15 11:31:55.796034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.081 "name": "raid_bdev1", 00:20:13.081 "uuid": "da43dcb2-3bf1-453e-9585-c653ac88a088", 00:20:13.081 "strip_size_kb": 0, 00:20:13.081 "state": "online", 00:20:13.081 "raid_level": "raid1", 00:20:13.081 "superblock": true, 00:20:13.081 "num_base_bdevs": 2, 00:20:13.081 "num_base_bdevs_discovered": 2, 00:20:13.081 "num_base_bdevs_operational": 2, 00:20:13.081 "base_bdevs_list": [ 00:20:13.081 { 00:20:13.081 "name": "pt1", 00:20:13.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:13.081 "is_configured": true, 00:20:13.081 "data_offset": 256, 00:20:13.081 "data_size": 7936 00:20:13.081 }, 00:20:13.081 { 00:20:13.081 "name": "pt2", 00:20:13.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:13.081 "is_configured": true, 00:20:13.081 "data_offset": 256, 00:20:13.081 "data_size": 7936 00:20:13.081 } 00:20:13.081 ] 00:20:13.081 }' 00:20:13.081 11:31:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.081 11:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.649 [2024-11-15 11:31:56.329265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:13.649 "name": "raid_bdev1", 00:20:13.649 "aliases": [ 00:20:13.649 "da43dcb2-3bf1-453e-9585-c653ac88a088" 00:20:13.649 ], 00:20:13.649 "product_name": "Raid Volume", 00:20:13.649 "block_size": 4128, 00:20:13.649 
"num_blocks": 7936, 00:20:13.649 "uuid": "da43dcb2-3bf1-453e-9585-c653ac88a088", 00:20:13.649 "md_size": 32, 00:20:13.649 "md_interleave": true, 00:20:13.649 "dif_type": 0, 00:20:13.649 "assigned_rate_limits": { 00:20:13.649 "rw_ios_per_sec": 0, 00:20:13.649 "rw_mbytes_per_sec": 0, 00:20:13.649 "r_mbytes_per_sec": 0, 00:20:13.649 "w_mbytes_per_sec": 0 00:20:13.649 }, 00:20:13.649 "claimed": false, 00:20:13.649 "zoned": false, 00:20:13.649 "supported_io_types": { 00:20:13.649 "read": true, 00:20:13.649 "write": true, 00:20:13.649 "unmap": false, 00:20:13.649 "flush": false, 00:20:13.649 "reset": true, 00:20:13.649 "nvme_admin": false, 00:20:13.649 "nvme_io": false, 00:20:13.649 "nvme_io_md": false, 00:20:13.649 "write_zeroes": true, 00:20:13.649 "zcopy": false, 00:20:13.649 "get_zone_info": false, 00:20:13.649 "zone_management": false, 00:20:13.649 "zone_append": false, 00:20:13.649 "compare": false, 00:20:13.649 "compare_and_write": false, 00:20:13.649 "abort": false, 00:20:13.649 "seek_hole": false, 00:20:13.649 "seek_data": false, 00:20:13.649 "copy": false, 00:20:13.649 "nvme_iov_md": false 00:20:13.649 }, 00:20:13.649 "memory_domains": [ 00:20:13.649 { 00:20:13.649 "dma_device_id": "system", 00:20:13.649 "dma_device_type": 1 00:20:13.649 }, 00:20:13.649 { 00:20:13.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.649 "dma_device_type": 2 00:20:13.649 }, 00:20:13.649 { 00:20:13.649 "dma_device_id": "system", 00:20:13.649 "dma_device_type": 1 00:20:13.649 }, 00:20:13.649 { 00:20:13.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.649 "dma_device_type": 2 00:20:13.649 } 00:20:13.649 ], 00:20:13.649 "driver_specific": { 00:20:13.649 "raid": { 00:20:13.649 "uuid": "da43dcb2-3bf1-453e-9585-c653ac88a088", 00:20:13.649 "strip_size_kb": 0, 00:20:13.649 "state": "online", 00:20:13.649 "raid_level": "raid1", 00:20:13.649 "superblock": true, 00:20:13.649 "num_base_bdevs": 2, 00:20:13.649 "num_base_bdevs_discovered": 2, 00:20:13.649 "num_base_bdevs_operational": 
2, 00:20:13.649 "base_bdevs_list": [ 00:20:13.649 { 00:20:13.649 "name": "pt1", 00:20:13.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:13.649 "is_configured": true, 00:20:13.649 "data_offset": 256, 00:20:13.649 "data_size": 7936 00:20:13.649 }, 00:20:13.649 { 00:20:13.649 "name": "pt2", 00:20:13.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:13.649 "is_configured": true, 00:20:13.649 "data_offset": 256, 00:20:13.649 "data_size": 7936 00:20:13.649 } 00:20:13.649 ] 00:20:13.649 } 00:20:13.649 } 00:20:13.649 }' 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:13.649 pt2' 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.649 11:31:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.649 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:13.650 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.650 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.650 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:13.650 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:13.650 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:13.650 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.650 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.650 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:13.909 [2024-11-15 11:31:56.601400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=da43dcb2-3bf1-453e-9585-c653ac88a088 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z da43dcb2-3bf1-453e-9585-c653ac88a088 ']' 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.909 [2024-11-15 11:31:56.652943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.909 [2024-11-15 11:31:56.653132] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.909 [2024-11-15 11:31:56.653441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.909 [2024-11-15 11:31:56.653541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:13.909 [2024-11-15 11:31:56.653565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.909 11:31:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.909 11:31:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.909 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.910 [2024-11-15 11:31:56.796984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:13.910 [2024-11-15 11:31:56.800085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:20:13.910 [2024-11-15 11:31:56.800212] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:13.910 [2024-11-15 11:31:56.800298] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:13.910 [2024-11-15 11:31:56.800325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.910 [2024-11-15 11:31:56.800341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:13.910 request: 00:20:13.910 { 00:20:13.910 "name": "raid_bdev1", 00:20:13.910 "raid_level": "raid1", 00:20:13.910 "base_bdevs": [ 00:20:13.910 "malloc1", 00:20:13.910 "malloc2" 00:20:13.910 ], 00:20:13.910 "superblock": false, 00:20:13.910 "method": "bdev_raid_create", 00:20:13.910 "req_id": 1 00:20:13.910 } 00:20:13.910 Got JSON-RPC error response 00:20:13.910 response: 00:20:13.910 { 00:20:13.910 "code": -17, 00:20:13.910 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:13.910 } 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.910 11:31:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:13.910 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.169 [2024-11-15 11:31:56.889091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:14.169 [2024-11-15 11:31:56.889242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.169 [2024-11-15 11:31:56.889280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:14.169 [2024-11-15 11:31:56.889299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.169 [2024-11-15 11:31:56.892334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.169 [2024-11-15 11:31:56.892414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:14.169 [2024-11-15 11:31:56.892496] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:14.169 [2024-11-15 11:31:56.892619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:14.169 pt1 00:20:14.169 11:31:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.169 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.169 
11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.169 "name": "raid_bdev1", 00:20:14.169 "uuid": "da43dcb2-3bf1-453e-9585-c653ac88a088", 00:20:14.169 "strip_size_kb": 0, 00:20:14.169 "state": "configuring", 00:20:14.169 "raid_level": "raid1", 00:20:14.169 "superblock": true, 00:20:14.169 "num_base_bdevs": 2, 00:20:14.169 "num_base_bdevs_discovered": 1, 00:20:14.169 "num_base_bdevs_operational": 2, 00:20:14.169 "base_bdevs_list": [ 00:20:14.169 { 00:20:14.169 "name": "pt1", 00:20:14.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:14.169 "is_configured": true, 00:20:14.169 "data_offset": 256, 00:20:14.170 "data_size": 7936 00:20:14.170 }, 00:20:14.170 { 00:20:14.170 "name": null, 00:20:14.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:14.170 "is_configured": false, 00:20:14.170 "data_offset": 256, 00:20:14.170 "data_size": 7936 00:20:14.170 } 00:20:14.170 ] 00:20:14.170 }' 00:20:14.170 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.170 11:31:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.738 [2024-11-15 11:31:57.441231] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:14.738 [2024-11-15 11:31:57.441392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.738 [2024-11-15 11:31:57.441446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:14.738 [2024-11-15 11:31:57.441466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.738 [2024-11-15 11:31:57.441856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.738 [2024-11-15 11:31:57.441893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:14.738 [2024-11-15 11:31:57.441975] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:14.738 [2024-11-15 11:31:57.442011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:14.738 [2024-11-15 11:31:57.442164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:14.738 [2024-11-15 11:31:57.442243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:14.738 [2024-11-15 11:31:57.442341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:14.738 [2024-11-15 11:31:57.442452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:14.738 [2024-11-15 11:31:57.442467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:14.738 [2024-11-15 11:31:57.442598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.738 pt2 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:14.738 11:31:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.738 11:31:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.738 "name": "raid_bdev1", 00:20:14.738 "uuid": "da43dcb2-3bf1-453e-9585-c653ac88a088", 00:20:14.738 "strip_size_kb": 0, 00:20:14.738 "state": "online", 00:20:14.738 "raid_level": "raid1", 00:20:14.738 "superblock": true, 00:20:14.738 "num_base_bdevs": 2, 00:20:14.738 "num_base_bdevs_discovered": 2, 00:20:14.738 "num_base_bdevs_operational": 2, 00:20:14.738 "base_bdevs_list": [ 00:20:14.738 { 00:20:14.738 "name": "pt1", 00:20:14.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:14.738 "is_configured": true, 00:20:14.738 "data_offset": 256, 00:20:14.738 "data_size": 7936 00:20:14.738 }, 00:20:14.738 { 00:20:14.738 "name": "pt2", 00:20:14.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:14.738 "is_configured": true, 00:20:14.738 "data_offset": 256, 00:20:14.738 "data_size": 7936 00:20:14.738 } 00:20:14.738 ] 00:20:14.738 }' 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.738 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.306 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:15.306 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:15.306 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:15.306 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:15.306 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:15.306 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:15.306 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:15.306 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:15.306 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.306 11:31:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.306 [2024-11-15 11:31:58.005933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.306 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.306 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:15.306 "name": "raid_bdev1", 00:20:15.306 "aliases": [ 00:20:15.306 "da43dcb2-3bf1-453e-9585-c653ac88a088" 00:20:15.306 ], 00:20:15.306 "product_name": "Raid Volume", 00:20:15.306 "block_size": 4128, 00:20:15.306 "num_blocks": 7936, 00:20:15.306 "uuid": "da43dcb2-3bf1-453e-9585-c653ac88a088", 00:20:15.306 "md_size": 32, 00:20:15.306 "md_interleave": true, 00:20:15.306 "dif_type": 0, 00:20:15.306 "assigned_rate_limits": { 00:20:15.306 "rw_ios_per_sec": 0, 00:20:15.306 "rw_mbytes_per_sec": 0, 00:20:15.306 "r_mbytes_per_sec": 0, 00:20:15.306 "w_mbytes_per_sec": 0 00:20:15.306 }, 00:20:15.306 "claimed": false, 00:20:15.306 "zoned": false, 00:20:15.306 "supported_io_types": { 00:20:15.306 "read": true, 00:20:15.306 "write": true, 00:20:15.306 "unmap": false, 00:20:15.306 "flush": false, 00:20:15.306 "reset": true, 00:20:15.306 "nvme_admin": false, 00:20:15.306 "nvme_io": false, 00:20:15.306 "nvme_io_md": false, 00:20:15.306 "write_zeroes": true, 00:20:15.306 "zcopy": false, 00:20:15.306 "get_zone_info": false, 00:20:15.306 "zone_management": false, 00:20:15.306 "zone_append": false, 00:20:15.306 "compare": false, 00:20:15.306 "compare_and_write": false, 00:20:15.306 "abort": false, 00:20:15.306 "seek_hole": false, 
00:20:15.306 "seek_data": false, 00:20:15.306 "copy": false, 00:20:15.306 "nvme_iov_md": false 00:20:15.306 }, 00:20:15.306 "memory_domains": [ 00:20:15.306 { 00:20:15.306 "dma_device_id": "system", 00:20:15.306 "dma_device_type": 1 00:20:15.306 }, 00:20:15.306 { 00:20:15.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.306 "dma_device_type": 2 00:20:15.306 }, 00:20:15.306 { 00:20:15.306 "dma_device_id": "system", 00:20:15.306 "dma_device_type": 1 00:20:15.306 }, 00:20:15.306 { 00:20:15.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.306 "dma_device_type": 2 00:20:15.306 } 00:20:15.306 ], 00:20:15.306 "driver_specific": { 00:20:15.306 "raid": { 00:20:15.306 "uuid": "da43dcb2-3bf1-453e-9585-c653ac88a088", 00:20:15.306 "strip_size_kb": 0, 00:20:15.306 "state": "online", 00:20:15.306 "raid_level": "raid1", 00:20:15.306 "superblock": true, 00:20:15.306 "num_base_bdevs": 2, 00:20:15.306 "num_base_bdevs_discovered": 2, 00:20:15.306 "num_base_bdevs_operational": 2, 00:20:15.306 "base_bdevs_list": [ 00:20:15.306 { 00:20:15.306 "name": "pt1", 00:20:15.306 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:15.306 "is_configured": true, 00:20:15.306 "data_offset": 256, 00:20:15.306 "data_size": 7936 00:20:15.306 }, 00:20:15.306 { 00:20:15.307 "name": "pt2", 00:20:15.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:15.307 "is_configured": true, 00:20:15.307 "data_offset": 256, 00:20:15.307 "data_size": 7936 00:20:15.307 } 00:20:15.307 ] 00:20:15.307 } 00:20:15.307 } 00:20:15.307 }' 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:15.307 pt2' 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.307 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.566 
11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:15.566 [2024-11-15 11:31:58.286009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' da43dcb2-3bf1-453e-9585-c653ac88a088 '!=' da43dcb2-3bf1-453e-9585-c653ac88a088 ']' 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.566 [2024-11-15 11:31:58.341668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:15.566 11:31:58 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.566 11:31:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.566 "name": "raid_bdev1", 00:20:15.566 "uuid": "da43dcb2-3bf1-453e-9585-c653ac88a088", 00:20:15.566 "strip_size_kb": 0, 00:20:15.566 "state": "online", 00:20:15.566 "raid_level": "raid1", 00:20:15.566 "superblock": true, 00:20:15.566 "num_base_bdevs": 2, 00:20:15.566 "num_base_bdevs_discovered": 1, 00:20:15.566 "num_base_bdevs_operational": 1, 00:20:15.566 "base_bdevs_list": [ 00:20:15.566 { 00:20:15.566 "name": null, 00:20:15.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.566 "is_configured": false, 00:20:15.566 "data_offset": 0, 00:20:15.566 "data_size": 7936 00:20:15.566 }, 00:20:15.566 { 00:20:15.566 "name": "pt2", 00:20:15.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:15.566 "is_configured": true, 00:20:15.566 "data_offset": 256, 00:20:15.566 "data_size": 7936 00:20:15.566 } 00:20:15.566 ] 00:20:15.566 }' 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.566 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.133 [2024-11-15 11:31:58.889867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:16.133 [2024-11-15 11:31:58.889920] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:16.133 [2024-11-15 11:31:58.890042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:16.133 [2024-11-15 11:31:58.890111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:16.133 [2024-11-15 11:31:58.890138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:16.133 11:31:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.133 [2024-11-15 11:31:58.969880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:16.133 [2024-11-15 11:31:58.969955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.133 [2024-11-15 11:31:58.969982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:16.133 [2024-11-15 11:31:58.970000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.133 [2024-11-15 11:31:58.973030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.133 [2024-11-15 11:31:58.973094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:16.133 [2024-11-15 11:31:58.973169] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:16.133 [2024-11-15 11:31:58.973296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:16.133 [2024-11-15 11:31:58.973404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:16.133 [2024-11-15 11:31:58.973435] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:16.133 [2024-11-15 11:31:58.973555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:16.133 [2024-11-15 11:31:58.973674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:16.133 [2024-11-15 11:31:58.973695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:16.133 [2024-11-15 11:31:58.973896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.133 pt2 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 
00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.133 11:31:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.133 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.133 "name": "raid_bdev1", 00:20:16.133 "uuid": "da43dcb2-3bf1-453e-9585-c653ac88a088", 00:20:16.133 "strip_size_kb": 0, 00:20:16.133 "state": "online", 00:20:16.133 "raid_level": "raid1", 00:20:16.133 "superblock": true, 00:20:16.133 "num_base_bdevs": 2, 00:20:16.133 "num_base_bdevs_discovered": 1, 00:20:16.133 "num_base_bdevs_operational": 1, 00:20:16.133 "base_bdevs_list": [ 00:20:16.133 { 00:20:16.133 "name": null, 00:20:16.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.133 "is_configured": false, 00:20:16.133 "data_offset": 256, 00:20:16.133 "data_size": 7936 00:20:16.133 }, 00:20:16.134 { 00:20:16.134 "name": "pt2", 00:20:16.134 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:16.134 "is_configured": true, 00:20:16.134 "data_offset": 256, 00:20:16.134 "data_size": 7936 00:20:16.134 } 00:20:16.134 ] 00:20:16.134 }' 00:20:16.134 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.134 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:16.748 11:31:59 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.748 [2024-11-15 11:31:59.522090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:16.748 [2024-11-15 11:31:59.522144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:16.748 [2024-11-15 11:31:59.522320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:16.748 [2024-11-15 11:31:59.522406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:16.748 [2024-11-15 11:31:59.522432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.748 [2024-11-15 11:31:59.590168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:16.748 [2024-11-15 11:31:59.590309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.748 [2024-11-15 11:31:59.590344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:16.748 [2024-11-15 11:31:59.590361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.748 [2024-11-15 11:31:59.593311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.748 [2024-11-15 11:31:59.593359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:16.748 [2024-11-15 11:31:59.593448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:16.748 [2024-11-15 11:31:59.593519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:16.748 [2024-11-15 11:31:59.593691] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:16.748 [2024-11-15 11:31:59.593709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:16.748 [2024-11-15 11:31:59.593737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:16.748 [2024-11-15 11:31:59.593809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:16.748 [2024-11-15 11:31:59.593961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:20:16.748 [2024-11-15 11:31:59.593977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:16.748 [2024-11-15 11:31:59.594080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:16.748 [2024-11-15 11:31:59.594456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:16.748 [2024-11-15 11:31:59.594486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:16.748 [2024-11-15 11:31:59.594701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.748 pt1 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.748 11:31:59 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.748 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.749 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.749 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.749 "name": "raid_bdev1", 00:20:16.749 "uuid": "da43dcb2-3bf1-453e-9585-c653ac88a088", 00:20:16.749 "strip_size_kb": 0, 00:20:16.749 "state": "online", 00:20:16.749 "raid_level": "raid1", 00:20:16.749 "superblock": true, 00:20:16.749 "num_base_bdevs": 2, 00:20:16.749 "num_base_bdevs_discovered": 1, 00:20:16.749 "num_base_bdevs_operational": 1, 00:20:16.749 "base_bdevs_list": [ 00:20:16.749 { 00:20:16.749 "name": null, 00:20:16.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.749 "is_configured": false, 00:20:16.749 "data_offset": 256, 00:20:16.749 "data_size": 7936 00:20:16.749 }, 00:20:16.749 { 00:20:16.749 "name": "pt2", 00:20:16.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:16.749 "is_configured": true, 00:20:16.749 "data_offset": 256, 00:20:16.749 "data_size": 7936 00:20:16.749 } 00:20:16.749 ] 00:20:16.749 }' 00:20:16.749 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.749 11:31:59 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.316 [2024-11-15 11:32:00.191194] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' da43dcb2-3bf1-453e-9585-c653ac88a088 '!=' da43dcb2-3bf1-453e-9585-c653ac88a088 ']' 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89050 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89050 ']' 00:20:17.316 11:32:00 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89050 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:17.316 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89050 00:20:17.573 killing process with pid 89050 00:20:17.573 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:17.573 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:17.573 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89050' 00:20:17.573 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 89050 00:20:17.573 [2024-11-15 11:32:00.278269] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:17.573 11:32:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 89050 00:20:17.573 [2024-11-15 11:32:00.278411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.573 [2024-11-15 11:32:00.278487] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.573 [2024-11-15 11:32:00.278513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:17.573 [2024-11-15 11:32:00.470151] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:18.949 11:32:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:18.949 00:20:18.949 real 0m6.986s 00:20:18.949 user 0m10.990s 00:20:18.949 sys 0m1.102s 
00:20:18.949 ************************************ 00:20:18.949 END TEST raid_superblock_test_md_interleaved 00:20:18.949 ************************************ 00:20:18.949 11:32:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:18.949 11:32:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.949 11:32:01 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:18.949 11:32:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:18.949 11:32:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:18.949 11:32:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:18.949 ************************************ 00:20:18.949 START TEST raid_rebuild_test_sb_md_interleaved 00:20:18.949 ************************************ 00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:18.949 11:32:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89385
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89385
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89385 ']'
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:18.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable
00:20:18.949 11:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:18.949 [2024-11-15 11:32:01.701260] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization...
00:20:18.949 I/O size of 3145728 is greater than zero copy threshold (65536).
00:20:18.949 Zero copy mechanism will not be used.
00:20:18.949 [2024-11-15 11:32:01.701713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89385 ]
00:20:18.949 [2024-11-15 11:32:01.889644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:19.208 [2024-11-15 11:32:02.035899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:19.467 [2024-11-15 11:32:02.259970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:19.467 [2024-11-15 11:32:02.260066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.035 BaseBdev1_malloc
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.035 [2024-11-15 11:32:02.737950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:20:20.035 [2024-11-15 11:32:02.738633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:20.035 [2024-11-15 11:32:02.738717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:20:20.035 [2024-11-15 11:32:02.738743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:20.035 [2024-11-15 11:32:02.741808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:20.035 [2024-11-15 11:32:02.741849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:20.035 BaseBdev1
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.035 BaseBdev2_malloc
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.035 [2024-11-15 11:32:02.795046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:20:20.035 [2024-11-15 11:32:02.795147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:20.035 [2024-11-15 11:32:02.795179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:20:20.035 [2024-11-15 11:32:02.795230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:20.035 [2024-11-15 11:32:02.797883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:20.035 [2024-11-15 11:32:02.797944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:20:20.035 BaseBdev2
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.035 spare_malloc
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.035 spare_delay
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.035 [2024-11-15 11:32:02.869929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:20.035 [2024-11-15 11:32:02.870036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:20.035 [2024-11-15 11:32:02.870067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:20:20.035 [2024-11-15 11:32:02.870086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:20.035 [2024-11-15 11:32:02.872804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:20.035 [2024-11-15 11:32:02.872866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:20.035 spare
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.035 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.036 [2024-11-15 11:32:02.877967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:20.036 [2024-11-15 11:32:02.880659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:20.036 [2024-11-15 11:32:02.880931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:20:20.036 [2024-11-15 11:32:02.880956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:20:20.036 [2024-11-15 11:32:02.881044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:20:20.036 [2024-11-15 11:32:02.881139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:20:20.036 [2024-11-15 11:32:02.881153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:20:20.036 [2024-11-15 11:32:02.881316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:20.036 "name": "raid_bdev1",
00:20:20.036 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b",
00:20:20.036 "strip_size_kb": 0,
00:20:20.036 "state": "online",
00:20:20.036 "raid_level": "raid1",
00:20:20.036 "superblock": true,
00:20:20.036 "num_base_bdevs": 2,
00:20:20.036 "num_base_bdevs_discovered": 2,
00:20:20.036 "num_base_bdevs_operational": 2,
00:20:20.036 "base_bdevs_list": [
00:20:20.036 {
00:20:20.036 "name": "BaseBdev1",
00:20:20.036 "uuid": "582de551-8218-5353-be14-476e2ef836e4",
00:20:20.036 "is_configured": true,
00:20:20.036 "data_offset": 256,
00:20:20.036 "data_size": 7936
00:20:20.036 },
00:20:20.036 {
00:20:20.036 "name": "BaseBdev2",
00:20:20.036 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a",
00:20:20.036 "is_configured": true,
00:20:20.036 "data_offset": 256,
00:20:20.036 "data_size": 7936
00:20:20.036 }
00:20:20.036 ]
00:20:20.036 }'
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:20.036 11:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.605 [2024-11-15 11:32:03.430440] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']'
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.605 [2024-11-15 11:32:03.534038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.605 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:20.864 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.864 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:20.864 "name": "raid_bdev1",
00:20:20.864 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b",
00:20:20.864 "strip_size_kb": 0,
00:20:20.864 "state": "online",
00:20:20.864 "raid_level": "raid1",
00:20:20.864 "superblock": true,
00:20:20.864 "num_base_bdevs": 2,
00:20:20.864 "num_base_bdevs_discovered": 1,
00:20:20.864 "num_base_bdevs_operational": 1,
00:20:20.864 "base_bdevs_list": [
00:20:20.864 {
00:20:20.864 "name": null,
00:20:20.864 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:20.864 "is_configured": false,
00:20:20.864 "data_offset": 0,
00:20:20.864 "data_size": 7936
00:20:20.864 },
00:20:20.864 {
00:20:20.864 "name": "BaseBdev2",
00:20:20.864 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a",
00:20:20.864 "is_configured": true,
00:20:20.864 "data_offset": 256,
00:20:20.864 "data_size": 7936
00:20:20.864 }
00:20:20.864 ]
00:20:20.864 }'
00:20:20.864 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:20.864 11:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:21.433 11:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:20:21.433 11:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:21.433 11:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:21.433 [2024-11-15 11:32:04.082261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:21.433 [2024-11-15 11:32:04.101416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:20:21.433 11:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:21.433 11:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1
00:20:21.433 [2024-11-15 11:32:04.104116] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:22.368 "name": "raid_bdev1",
00:20:22.368 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b",
00:20:22.368 "strip_size_kb": 0,
00:20:22.368 "state": "online",
00:20:22.368 "raid_level": "raid1",
00:20:22.368 "superblock": true,
00:20:22.368 "num_base_bdevs": 2,
00:20:22.368 "num_base_bdevs_discovered": 2,
00:20:22.368 "num_base_bdevs_operational": 2,
00:20:22.368 "process": {
00:20:22.368 "type": "rebuild",
00:20:22.368 "target": "spare",
00:20:22.368 "progress": {
00:20:22.368 "blocks": 2560,
00:20:22.368 "percent": 32
00:20:22.368 }
00:20:22.368 },
00:20:22.368 "base_bdevs_list": [
00:20:22.368 {
00:20:22.368 "name": "spare",
00:20:22.368 "uuid": "b74bde3c-73b9-516f-a3dc-a8f021ed97c2",
00:20:22.368 "is_configured": true,
00:20:22.368 "data_offset": 256,
00:20:22.368 "data_size": 7936
00:20:22.368 },
00:20:22.368 {
00:20:22.368 "name": "BaseBdev2",
00:20:22.368 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a",
00:20:22.368 "is_configured": true,
00:20:22.368 "data_offset": 256,
00:20:22.368 "data_size": 7936
00:20:22.368 }
00:20:22.368 ]
00:20:22.368 }'
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:22.368 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:22.368 [2024-11-15 11:32:05.281964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:22.368 [2024-11-15 11:32:05.314918] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:22.368 [2024-11-15 11:32:05.315470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:22.368 [2024-11-15 11:32:05.315718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:22.368 [2024-11-15 11:32:05.315785] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:22.627 "name": "raid_bdev1",
00:20:22.627 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b",
00:20:22.627 "strip_size_kb": 0,
00:20:22.627 "state": "online",
00:20:22.627 "raid_level": "raid1",
00:20:22.627 "superblock": true,
00:20:22.627 "num_base_bdevs": 2,
00:20:22.627 "num_base_bdevs_discovered": 1,
00:20:22.627 "num_base_bdevs_operational": 1,
00:20:22.627 "base_bdevs_list": [
00:20:22.627 {
00:20:22.627 "name": null,
00:20:22.627 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:22.627 "is_configured": false,
00:20:22.627 "data_offset": 0,
00:20:22.627 "data_size": 7936
00:20:22.627 },
00:20:22.627 {
00:20:22.627 "name": "BaseBdev2",
00:20:22.627 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a",
00:20:22.627 "is_configured": true,
00:20:22.627 "data_offset": 256,
00:20:22.627 "data_size": 7936
00:20:22.627 }
00:20:22.627 ]
00:20:22.627 }'
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:22.627 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:23.197 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:23.197 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:23.197 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:20:23.197 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:20:23.197 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:23.197 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:23.197 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:23.197 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:23.197 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:23.197 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:23.197 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:23.197 "name": "raid_bdev1",
00:20:23.197 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b",
00:20:23.197 "strip_size_kb": 0,
00:20:23.197 "state": "online",
00:20:23.197 "raid_level": "raid1",
00:20:23.197 "superblock": true,
00:20:23.197 "num_base_bdevs": 2,
00:20:23.197 "num_base_bdevs_discovered": 1,
00:20:23.197 "num_base_bdevs_operational": 1,
00:20:23.197 "base_bdevs_list": [
00:20:23.197 {
00:20:23.197 "name": null,
00:20:23.197 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:23.197 "is_configured": false,
00:20:23.197 "data_offset": 0,
00:20:23.197 "data_size": 7936
00:20:23.197 },
00:20:23.197 {
00:20:23.197 "name": "BaseBdev2",
00:20:23.197 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a",
00:20:23.197 "is_configured": true,
00:20:23.197 "data_offset": 256,
00:20:23.197 "data_size": 7936
00:20:23.197 }
00:20:23.197 ]
00:20:23.197 }'
00:20:23.197 11:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:23.197 11:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:20:23.197 11:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:23.197 11:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:20:23.197 11:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:20:23.197 11:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:23.197 11:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:23.197 [2024-11-15 11:32:06.075819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:23.197 [2024-11-15 11:32:06.092503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:20:23.197 11:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:23.197 11:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1
00:20:23.197 [2024-11-15 11:32:06.095468] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:24.575 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:24.575 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:24.575 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:24.575 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:24.575 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:24.575 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:24.575 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:24.575 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:24.575 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:24.575 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:24.575 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:24.576 "name": "raid_bdev1",
00:20:24.576 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b",
00:20:24.576 "strip_size_kb": 0,
00:20:24.576 "state": "online",
00:20:24.576 "raid_level": "raid1",
00:20:24.576 "superblock": true,
00:20:24.576 "num_base_bdevs": 2,
00:20:24.576 "num_base_bdevs_discovered": 2,
00:20:24.576 "num_base_bdevs_operational": 2,
00:20:24.576 "process": {
00:20:24.576 "type": "rebuild",
00:20:24.576 "target": "spare",
00:20:24.576 "progress": {
00:20:24.576 "blocks": 2560,
00:20:24.576 "percent": 32
00:20:24.576 }
00:20:24.576 },
00:20:24.576 "base_bdevs_list": [
00:20:24.576 {
00:20:24.576 "name": "spare",
00:20:24.576 "uuid": "b74bde3c-73b9-516f-a3dc-a8f021ed97c2",
00:20:24.576 "is_configured": true,
00:20:24.576 "data_offset": 256,
00:20:24.576 "data_size": 7936
00:20:24.576 },
00:20:24.576 {
00:20:24.576 "name": "BaseBdev2",
00:20:24.576 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a",
00:20:24.576 "is_configured": true,
00:20:24.576 "data_offset": 256,
00:20:24.576 "data_size": 7936
00:20:24.576 }
00:20:24.576 ]
00:20:24.576 }'
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=804
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:24.576 "name": "raid_bdev1",
00:20:24.576 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b",
00:20:24.576 "strip_size_kb": 0,
00:20:24.576 "state": "online",
00:20:24.576 "raid_level": "raid1",
00:20:24.576 "superblock": true,
00:20:24.576 "num_base_bdevs": 2,
00:20:24.576 "num_base_bdevs_discovered": 2,
00:20:24.576 "num_base_bdevs_operational": 2,
00:20:24.576 "process": {
00:20:24.576 "type": "rebuild",
00:20:24.576 "target": "spare",
00:20:24.576 "progress": {
00:20:24.576 "blocks": 2816,
00:20:24.576 "percent": 35
00:20:24.576 }
00:20:24.576 },
00:20:24.576 "base_bdevs_list": [
00:20:24.576 {
00:20:24.576 "name": "spare",
00:20:24.576 "uuid": "b74bde3c-73b9-516f-a3dc-a8f021ed97c2",
00:20:24.576 "is_configured": true,
00:20:24.576 "data_offset": 256,
00:20:24.576 "data_size": 7936
00:20:24.576 },
00:20:24.576 {
00:20:24.576 "name": "BaseBdev2",
00:20:24.576 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a",
00:20:24.576 "is_configured": true,
00:20:24.576 "data_offset": 256,
00:20:24.576 "data_size": 7936
00:20:24.576 }
00:20:24.576 ]
00:20:24.576 }'
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:24.576 11:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1
00:20:25.512 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:25.512 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:25.513 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:25.513 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:25.513 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:25.513 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:25.513 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:25.513 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:25.513 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:25.513 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:25.513 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:25.772 11:32:08
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.772 "name": "raid_bdev1", 00:20:25.772 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:25.772 "strip_size_kb": 0, 00:20:25.772 "state": "online", 00:20:25.772 "raid_level": "raid1", 00:20:25.772 "superblock": true, 00:20:25.772 "num_base_bdevs": 2, 00:20:25.772 "num_base_bdevs_discovered": 2, 00:20:25.772 "num_base_bdevs_operational": 2, 00:20:25.772 "process": { 00:20:25.772 "type": "rebuild", 00:20:25.772 "target": "spare", 00:20:25.772 "progress": { 00:20:25.772 "blocks": 5888, 00:20:25.772 "percent": 74 00:20:25.772 } 00:20:25.772 }, 00:20:25.772 "base_bdevs_list": [ 00:20:25.772 { 00:20:25.772 "name": "spare", 00:20:25.772 "uuid": "b74bde3c-73b9-516f-a3dc-a8f021ed97c2", 00:20:25.772 "is_configured": true, 00:20:25.772 "data_offset": 256, 00:20:25.772 "data_size": 7936 00:20:25.772 }, 00:20:25.772 { 00:20:25.772 "name": "BaseBdev2", 00:20:25.772 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:25.772 "is_configured": true, 00:20:25.772 "data_offset": 256, 00:20:25.772 "data_size": 7936 00:20:25.772 } 00:20:25.772 ] 00:20:25.772 }' 00:20:25.772 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.772 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.772 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.772 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.772 11:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:26.339 [2024-11-15 11:32:09.222904] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:26.340 [2024-11-15 11:32:09.223300] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:26.340 [2024-11-15 11:32:09.223498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:26.908 "name": "raid_bdev1", 00:20:26.908 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:26.908 "strip_size_kb": 0, 00:20:26.908 "state": "online", 00:20:26.908 "raid_level": "raid1", 00:20:26.908 "superblock": true, 00:20:26.908 "num_base_bdevs": 2, 00:20:26.908 
"num_base_bdevs_discovered": 2, 00:20:26.908 "num_base_bdevs_operational": 2, 00:20:26.908 "base_bdevs_list": [ 00:20:26.908 { 00:20:26.908 "name": "spare", 00:20:26.908 "uuid": "b74bde3c-73b9-516f-a3dc-a8f021ed97c2", 00:20:26.908 "is_configured": true, 00:20:26.908 "data_offset": 256, 00:20:26.908 "data_size": 7936 00:20:26.908 }, 00:20:26.908 { 00:20:26.908 "name": "BaseBdev2", 00:20:26.908 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:26.908 "is_configured": true, 00:20:26.908 "data_offset": 256, 00:20:26.908 "data_size": 7936 00:20:26.908 } 00:20:26.908 ] 00:20:26.908 }' 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:26.908 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.909 11:32:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:26.909 "name": "raid_bdev1", 00:20:26.909 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:26.909 "strip_size_kb": 0, 00:20:26.909 "state": "online", 00:20:26.909 "raid_level": "raid1", 00:20:26.909 "superblock": true, 00:20:26.909 "num_base_bdevs": 2, 00:20:26.909 "num_base_bdevs_discovered": 2, 00:20:26.909 "num_base_bdevs_operational": 2, 00:20:26.909 "base_bdevs_list": [ 00:20:26.909 { 00:20:26.909 "name": "spare", 00:20:26.909 "uuid": "b74bde3c-73b9-516f-a3dc-a8f021ed97c2", 00:20:26.909 "is_configured": true, 00:20:26.909 "data_offset": 256, 00:20:26.909 "data_size": 7936 00:20:26.909 }, 00:20:26.909 { 00:20:26.909 "name": "BaseBdev2", 00:20:26.909 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:26.909 "is_configured": true, 00:20:26.909 "data_offset": 256, 00:20:26.909 "data_size": 7936 00:20:26.909 } 00:20:26.909 ] 00:20:26.909 }' 00:20:26.909 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:27.168 11:32:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.168 "name": 
"raid_bdev1", 00:20:27.168 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:27.168 "strip_size_kb": 0, 00:20:27.168 "state": "online", 00:20:27.168 "raid_level": "raid1", 00:20:27.168 "superblock": true, 00:20:27.168 "num_base_bdevs": 2, 00:20:27.168 "num_base_bdevs_discovered": 2, 00:20:27.168 "num_base_bdevs_operational": 2, 00:20:27.168 "base_bdevs_list": [ 00:20:27.168 { 00:20:27.168 "name": "spare", 00:20:27.168 "uuid": "b74bde3c-73b9-516f-a3dc-a8f021ed97c2", 00:20:27.168 "is_configured": true, 00:20:27.168 "data_offset": 256, 00:20:27.168 "data_size": 7936 00:20:27.168 }, 00:20:27.168 { 00:20:27.168 "name": "BaseBdev2", 00:20:27.168 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:27.168 "is_configured": true, 00:20:27.168 "data_offset": 256, 00:20:27.168 "data_size": 7936 00:20:27.168 } 00:20:27.168 ] 00:20:27.168 }' 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.168 11:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.736 [2024-11-15 11:32:10.477724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:27.736 [2024-11-15 11:32:10.477784] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:27.736 [2024-11-15 11:32:10.477910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:27.736 [2024-11-15 11:32:10.478008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:27.736 [2024-11-15 
11:32:10.478025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:27.736 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.736 11:32:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.736 [2024-11-15 11:32:10.553673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:27.736 [2024-11-15 11:32:10.553761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.736 [2024-11-15 11:32:10.553796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:27.736 [2024-11-15 11:32:10.553811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.736 [2024-11-15 11:32:10.556817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.736 [2024-11-15 11:32:10.556858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:27.736 [2024-11-15 11:32:10.556955] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:27.736 [2024-11-15 11:32:10.557018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:27.736 [2024-11-15 11:32:10.557162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.736 spare 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.737 [2024-11-15 11:32:10.657315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:27.737 [2024-11-15 11:32:10.657347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:27.737 [2024-11-15 11:32:10.657471] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:27.737 [2024-11-15 11:32:10.657576] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:27.737 [2024-11-15 11:32:10.657592] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:27.737 [2024-11-15 11:32:10.657730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.737 11:32:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.737 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.995 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.995 "name": "raid_bdev1", 00:20:27.995 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:27.995 "strip_size_kb": 0, 00:20:27.995 "state": "online", 00:20:27.995 "raid_level": "raid1", 00:20:27.996 "superblock": true, 00:20:27.996 "num_base_bdevs": 2, 00:20:27.996 "num_base_bdevs_discovered": 2, 00:20:27.996 "num_base_bdevs_operational": 2, 00:20:27.996 "base_bdevs_list": [ 00:20:27.996 { 00:20:27.996 "name": "spare", 00:20:27.996 "uuid": "b74bde3c-73b9-516f-a3dc-a8f021ed97c2", 00:20:27.996 "is_configured": true, 00:20:27.996 "data_offset": 256, 00:20:27.996 "data_size": 7936 00:20:27.996 }, 00:20:27.996 { 00:20:27.996 "name": "BaseBdev2", 00:20:27.996 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:27.996 "is_configured": true, 00:20:27.996 "data_offset": 256, 00:20:27.996 "data_size": 7936 00:20:27.996 } 00:20:27.996 ] 00:20:27.996 }' 00:20:27.996 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.996 11:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.254 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:28.254 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.254 11:32:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:28.254 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:28.254 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.254 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.254 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.254 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.254 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.514 "name": "raid_bdev1", 00:20:28.514 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:28.514 "strip_size_kb": 0, 00:20:28.514 "state": "online", 00:20:28.514 "raid_level": "raid1", 00:20:28.514 "superblock": true, 00:20:28.514 "num_base_bdevs": 2, 00:20:28.514 "num_base_bdevs_discovered": 2, 00:20:28.514 "num_base_bdevs_operational": 2, 00:20:28.514 "base_bdevs_list": [ 00:20:28.514 { 00:20:28.514 "name": "spare", 00:20:28.514 "uuid": "b74bde3c-73b9-516f-a3dc-a8f021ed97c2", 00:20:28.514 "is_configured": true, 00:20:28.514 "data_offset": 256, 00:20:28.514 "data_size": 7936 00:20:28.514 }, 00:20:28.514 { 00:20:28.514 "name": "BaseBdev2", 00:20:28.514 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:28.514 "is_configured": true, 00:20:28.514 "data_offset": 256, 00:20:28.514 "data_size": 7936 00:20:28.514 } 00:20:28.514 ] 00:20:28.514 }' 00:20:28.514 11:32:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.514 [2024-11-15 11:32:11.422411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.514 11:32:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.514 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.772 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.772 "name": "raid_bdev1", 00:20:28.772 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:28.772 "strip_size_kb": 0, 00:20:28.772 "state": "online", 00:20:28.772 
"raid_level": "raid1", 00:20:28.772 "superblock": true, 00:20:28.772 "num_base_bdevs": 2, 00:20:28.772 "num_base_bdevs_discovered": 1, 00:20:28.772 "num_base_bdevs_operational": 1, 00:20:28.772 "base_bdevs_list": [ 00:20:28.772 { 00:20:28.772 "name": null, 00:20:28.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.773 "is_configured": false, 00:20:28.773 "data_offset": 0, 00:20:28.773 "data_size": 7936 00:20:28.773 }, 00:20:28.773 { 00:20:28.773 "name": "BaseBdev2", 00:20:28.773 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:28.773 "is_configured": true, 00:20:28.773 "data_offset": 256, 00:20:28.773 "data_size": 7936 00:20:28.773 } 00:20:28.773 ] 00:20:28.773 }' 00:20:28.773 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.773 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.340 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:29.340 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.340 11:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.340 [2024-11-15 11:32:11.994637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.340 [2024-11-15 11:32:11.995257] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:29.340 [2024-11-15 11:32:11.995293] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:29.340 [2024-11-15 11:32:11.995361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.340 [2024-11-15 11:32:12.013021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:29.340 11:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.340 11:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:29.340 [2024-11-15 11:32:12.016100] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:30.275 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.275 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.275 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:30.275 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:30.275 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.275 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.275 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.275 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.275 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.275 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.275 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:30.275 "name": "raid_bdev1", 00:20:30.275 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:30.275 "strip_size_kb": 0, 00:20:30.275 "state": "online", 00:20:30.275 "raid_level": "raid1", 00:20:30.275 "superblock": true, 00:20:30.275 "num_base_bdevs": 2, 00:20:30.275 "num_base_bdevs_discovered": 2, 00:20:30.275 "num_base_bdevs_operational": 2, 00:20:30.275 "process": { 00:20:30.275 "type": "rebuild", 00:20:30.275 "target": "spare", 00:20:30.275 "progress": { 00:20:30.275 "blocks": 2560, 00:20:30.275 "percent": 32 00:20:30.275 } 00:20:30.275 }, 00:20:30.275 "base_bdevs_list": [ 00:20:30.275 { 00:20:30.275 "name": "spare", 00:20:30.275 "uuid": "b74bde3c-73b9-516f-a3dc-a8f021ed97c2", 00:20:30.275 "is_configured": true, 00:20:30.275 "data_offset": 256, 00:20:30.275 "data_size": 7936 00:20:30.275 }, 00:20:30.275 { 00:20:30.275 "name": "BaseBdev2", 00:20:30.275 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:30.275 "is_configured": true, 00:20:30.275 "data_offset": 256, 00:20:30.275 "data_size": 7936 00:20:30.275 } 00:20:30.276 ] 00:20:30.276 }' 00:20:30.276 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.276 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.276 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.276 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.276 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:30.276 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.276 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.276 [2024-11-15 11:32:13.197915] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.535 [2024-11-15 11:32:13.227447] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:30.535 [2024-11-15 11:32:13.227547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.535 [2024-11-15 11:32:13.227572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.535 [2024-11-15 11:32:13.227587] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.535 11:32:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.535 "name": "raid_bdev1", 00:20:30.535 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:30.535 "strip_size_kb": 0, 00:20:30.535 "state": "online", 00:20:30.535 "raid_level": "raid1", 00:20:30.535 "superblock": true, 00:20:30.535 "num_base_bdevs": 2, 00:20:30.535 "num_base_bdevs_discovered": 1, 00:20:30.535 "num_base_bdevs_operational": 1, 00:20:30.535 "base_bdevs_list": [ 00:20:30.535 { 00:20:30.535 "name": null, 00:20:30.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.535 "is_configured": false, 00:20:30.535 "data_offset": 0, 00:20:30.535 "data_size": 7936 00:20:30.535 }, 00:20:30.535 { 00:20:30.535 "name": "BaseBdev2", 00:20:30.535 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:30.535 "is_configured": true, 00:20:30.535 "data_offset": 256, 00:20:30.535 "data_size": 7936 00:20:30.535 } 00:20:30.535 ] 00:20:30.535 }' 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.535 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.102 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:31.102 11:32:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.102 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.102 [2024-11-15 11:32:13.825999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:31.102 [2024-11-15 11:32:13.826108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.102 [2024-11-15 11:32:13.826150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:31.102 [2024-11-15 11:32:13.826170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.102 [2024-11-15 11:32:13.826618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.102 [2024-11-15 11:32:13.826655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:31.102 [2024-11-15 11:32:13.826738] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:31.102 [2024-11-15 11:32:13.826769] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:31.102 [2024-11-15 11:32:13.826794] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:31.102 [2024-11-15 11:32:13.826851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:31.102 [2024-11-15 11:32:13.844526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:31.102 spare 00:20:31.102 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.102 11:32:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:31.102 [2024-11-15 11:32:13.847498] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:32.037 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.037 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.037 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.037 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.037 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.037 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.037 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.037 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.037 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.038 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.038 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:32.038 "name": "raid_bdev1", 00:20:32.038 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:32.038 "strip_size_kb": 0, 00:20:32.038 "state": "online", 00:20:32.038 "raid_level": "raid1", 00:20:32.038 "superblock": true, 00:20:32.038 "num_base_bdevs": 2, 00:20:32.038 "num_base_bdevs_discovered": 2, 00:20:32.038 "num_base_bdevs_operational": 2, 00:20:32.038 "process": { 00:20:32.038 "type": "rebuild", 00:20:32.038 "target": "spare", 00:20:32.038 "progress": { 00:20:32.038 "blocks": 2560, 00:20:32.038 "percent": 32 00:20:32.038 } 00:20:32.038 }, 00:20:32.038 "base_bdevs_list": [ 00:20:32.038 { 00:20:32.038 "name": "spare", 00:20:32.038 "uuid": "b74bde3c-73b9-516f-a3dc-a8f021ed97c2", 00:20:32.038 "is_configured": true, 00:20:32.038 "data_offset": 256, 00:20:32.038 "data_size": 7936 00:20:32.038 }, 00:20:32.038 { 00:20:32.038 "name": "BaseBdev2", 00:20:32.038 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:32.038 "is_configured": true, 00:20:32.038 "data_offset": 256, 00:20:32.038 "data_size": 7936 00:20:32.038 } 00:20:32.038 ] 00:20:32.038 }' 00:20:32.038 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.038 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.038 11:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.296 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.296 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:32.296 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.296 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.296 [2024-11-15 
11:32:15.029000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:32.297 [2024-11-15 11:32:15.058659] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:32.297 [2024-11-15 11:32:15.058999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.297 [2024-11-15 11:32:15.059047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:32.297 [2024-11-15 11:32:15.059071] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.297 11:32:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.297 "name": "raid_bdev1", 00:20:32.297 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:32.297 "strip_size_kb": 0, 00:20:32.297 "state": "online", 00:20:32.297 "raid_level": "raid1", 00:20:32.297 "superblock": true, 00:20:32.297 "num_base_bdevs": 2, 00:20:32.297 "num_base_bdevs_discovered": 1, 00:20:32.297 "num_base_bdevs_operational": 1, 00:20:32.297 "base_bdevs_list": [ 00:20:32.297 { 00:20:32.297 "name": null, 00:20:32.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.297 "is_configured": false, 00:20:32.297 "data_offset": 0, 00:20:32.297 "data_size": 7936 00:20:32.297 }, 00:20:32.297 { 00:20:32.297 "name": "BaseBdev2", 00:20:32.297 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:32.297 "is_configured": true, 00:20:32.297 "data_offset": 256, 00:20:32.297 "data_size": 7936 00:20:32.297 } 00:20:32.297 ] 00:20:32.297 }' 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.297 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.864 11:32:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.864 "name": "raid_bdev1", 00:20:32.864 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:32.864 "strip_size_kb": 0, 00:20:32.864 "state": "online", 00:20:32.864 "raid_level": "raid1", 00:20:32.864 "superblock": true, 00:20:32.864 "num_base_bdevs": 2, 00:20:32.864 "num_base_bdevs_discovered": 1, 00:20:32.864 "num_base_bdevs_operational": 1, 00:20:32.864 "base_bdevs_list": [ 00:20:32.864 { 00:20:32.864 "name": null, 00:20:32.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.864 "is_configured": false, 00:20:32.864 "data_offset": 0, 00:20:32.864 "data_size": 7936 00:20:32.864 }, 00:20:32.864 { 00:20:32.864 "name": "BaseBdev2", 00:20:32.864 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:32.864 "is_configured": true, 00:20:32.864 "data_offset": 256, 
00:20:32.864 "data_size": 7936 00:20:32.864 } 00:20:32.864 ] 00:20:32.864 }' 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.864 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:33.123 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.123 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:33.123 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.123 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:33.123 [2024-11-15 11:32:15.822303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:33.123 [2024-11-15 11:32:15.822526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.123 [2024-11-15 11:32:15.822573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:33.123 [2024-11-15 11:32:15.822592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.123 [2024-11-15 11:32:15.822880] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.123 [2024-11-15 11:32:15.822904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:33.123 [2024-11-15 11:32:15.822978] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:33.123 [2024-11-15 11:32:15.822999] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:33.123 [2024-11-15 11:32:15.823014] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:33.123 [2024-11-15 11:32:15.823028] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:33.123 BaseBdev1 00:20:33.123 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.123 11:32:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.058 11:32:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.058 "name": "raid_bdev1", 00:20:34.058 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:34.058 "strip_size_kb": 0, 00:20:34.058 "state": "online", 00:20:34.058 "raid_level": "raid1", 00:20:34.058 "superblock": true, 00:20:34.058 "num_base_bdevs": 2, 00:20:34.058 "num_base_bdevs_discovered": 1, 00:20:34.058 "num_base_bdevs_operational": 1, 00:20:34.058 "base_bdevs_list": [ 00:20:34.058 { 00:20:34.058 "name": null, 00:20:34.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.058 "is_configured": false, 00:20:34.058 "data_offset": 0, 00:20:34.058 "data_size": 7936 00:20:34.058 }, 00:20:34.058 { 00:20:34.058 "name": "BaseBdev2", 00:20:34.058 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:34.058 "is_configured": true, 00:20:34.058 "data_offset": 256, 00:20:34.058 "data_size": 7936 00:20:34.058 } 00:20:34.058 ] 00:20:34.058 }' 00:20:34.058 11:32:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.058 11:32:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.626 "name": "raid_bdev1", 00:20:34.626 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:34.626 "strip_size_kb": 0, 00:20:34.626 "state": "online", 00:20:34.626 "raid_level": "raid1", 00:20:34.626 "superblock": true, 00:20:34.626 "num_base_bdevs": 2, 00:20:34.626 "num_base_bdevs_discovered": 1, 00:20:34.626 "num_base_bdevs_operational": 1, 00:20:34.626 "base_bdevs_list": [ 00:20:34.626 { 00:20:34.626 "name": 
null, 00:20:34.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.626 "is_configured": false, 00:20:34.626 "data_offset": 0, 00:20:34.626 "data_size": 7936 00:20:34.626 }, 00:20:34.626 { 00:20:34.626 "name": "BaseBdev2", 00:20:34.626 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:34.626 "is_configured": true, 00:20:34.626 "data_offset": 256, 00:20:34.626 "data_size": 7936 00:20:34.626 } 00:20:34.626 ] 00:20:34.626 }' 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.626 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.886 [2024-11-15 11:32:17.579041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:34.886 [2024-11-15 11:32:17.579467] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:34.886 [2024-11-15 11:32:17.579508] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:34.886 request: 00:20:34.886 { 00:20:34.886 "base_bdev": "BaseBdev1", 00:20:34.886 "raid_bdev": "raid_bdev1", 00:20:34.886 "method": "bdev_raid_add_base_bdev", 00:20:34.886 "req_id": 1 00:20:34.886 } 00:20:34.886 Got JSON-RPC error response 00:20:34.886 response: 00:20:34.886 { 00:20:34.886 "code": -22, 00:20:34.886 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:34.886 } 00:20:34.886 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:34.886 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:20:34.886 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:34.886 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:34.886 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:34.886 11:32:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.822 "name": "raid_bdev1", 00:20:35.822 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:35.822 "strip_size_kb": 0, 
00:20:35.822 "state": "online", 00:20:35.822 "raid_level": "raid1", 00:20:35.822 "superblock": true, 00:20:35.822 "num_base_bdevs": 2, 00:20:35.822 "num_base_bdevs_discovered": 1, 00:20:35.822 "num_base_bdevs_operational": 1, 00:20:35.822 "base_bdevs_list": [ 00:20:35.822 { 00:20:35.822 "name": null, 00:20:35.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.822 "is_configured": false, 00:20:35.822 "data_offset": 0, 00:20:35.822 "data_size": 7936 00:20:35.822 }, 00:20:35.822 { 00:20:35.822 "name": "BaseBdev2", 00:20:35.822 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:35.822 "is_configured": true, 00:20:35.822 "data_offset": 256, 00:20:35.822 "data_size": 7936 00:20:35.822 } 00:20:35.822 ] 00:20:35.822 }' 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.822 11:32:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.390 11:32:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.390 "name": "raid_bdev1", 00:20:36.390 "uuid": "4220033f-3db5-45eb-8933-b2afff96e53b", 00:20:36.390 "strip_size_kb": 0, 00:20:36.390 "state": "online", 00:20:36.390 "raid_level": "raid1", 00:20:36.390 "superblock": true, 00:20:36.390 "num_base_bdevs": 2, 00:20:36.390 "num_base_bdevs_discovered": 1, 00:20:36.390 "num_base_bdevs_operational": 1, 00:20:36.390 "base_bdevs_list": [ 00:20:36.390 { 00:20:36.390 "name": null, 00:20:36.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.390 "is_configured": false, 00:20:36.390 "data_offset": 0, 00:20:36.390 "data_size": 7936 00:20:36.390 }, 00:20:36.390 { 00:20:36.390 "name": "BaseBdev2", 00:20:36.390 "uuid": "ed3b6017-b699-5eb4-a05a-76b2005a794a", 00:20:36.390 "is_configured": true, 00:20:36.390 "data_offset": 256, 00:20:36.390 "data_size": 7936 00:20:36.390 } 00:20:36.390 ] 00:20:36.390 }' 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.390 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:36.391 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89385 00:20:36.391 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89385 ']' 00:20:36.391 11:32:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89385 00:20:36.391 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:20:36.391 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.391 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89385 00:20:36.650 killing process with pid 89385 00:20:36.650 Received shutdown signal, test time was about 60.000000 seconds 00:20:36.650 00:20:36.650 Latency(us) 00:20:36.650 [2024-11-15T11:32:19.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.650 [2024-11-15T11:32:19.600Z] =================================================================================================================== 00:20:36.650 [2024-11-15T11:32:19.600Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.650 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:36.650 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:36.650 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89385' 00:20:36.650 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89385 00:20:36.650 [2024-11-15 11:32:19.362254] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:36.650 11:32:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89385 00:20:36.650 [2024-11-15 11:32:19.362445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:36.650 [2024-11-15 11:32:19.362517] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:36.650 [2024-11-15 11:32:19.362545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:36.909 [2024-11-15 11:32:19.645612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:37.844 11:32:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:37.844 00:20:37.844 real 0m19.182s 00:20:37.844 user 0m26.223s 00:20:37.844 sys 0m1.587s 00:20:37.844 11:32:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:37.844 ************************************ 00:20:37.844 END TEST raid_rebuild_test_sb_md_interleaved 00:20:37.844 ************************************ 00:20:37.844 11:32:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.103 11:32:20 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:38.103 11:32:20 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:38.103 11:32:20 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89385 ']' 00:20:38.103 11:32:20 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89385 00:20:38.104 11:32:20 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:38.104 00:20:38.104 real 13m6.972s 00:20:38.104 user 18m24.225s 00:20:38.104 sys 1m53.393s 00:20:38.104 11:32:20 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:38.104 11:32:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:38.104 ************************************ 00:20:38.104 END TEST bdev_raid 00:20:38.104 ************************************ 00:20:38.104 11:32:20 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:38.104 11:32:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:38.104 11:32:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:38.104 11:32:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.104 
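The verify_raid_bdev_state / verify_raid_bdev_process helpers traced above both follow the same two-step jq pattern: select one bdev by name out of `rpc_cmd bdev_raid_get_bdevs all`, then read optional process fields with a `// "none"` fallback so an idle array compares equal to the expected `none`. A minimal standalone sketch of that pattern (the JSON here is a trimmed, hypothetical stand-in for the RPC output, not a real capture):

```shell
# Trimmed, hypothetical stand-in for 'rpc_cmd bdev_raid_get_bdevs all' output.
json='[
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid1"},
  {"name": "other_bdev", "state": "offline"}
]'

# Step 1: pick out the single bdev under test, as bdev_raid.sh@113/@174 do.
raid_bdev_info=$(echo "$json" | jq -r '.[] | select(.name == "raid_bdev1")')

# Step 2: read .process.type with a default. When no rebuild process is
# attached the field is absent, so jq's // operator yields "none" -- the
# value the [[ none == \n\o\n\e ]] checks in the trace compare against.
process_type=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
echo "$process_type"
```

Against the sample JSON this prints `none`, matching the branch the log takes once the rebuild has finished.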
************************************ 00:20:38.104 START TEST spdkcli_raid 00:20:38.104 ************************************ 00:20:38.104 11:32:20 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:38.104 * Looking for test storage... 00:20:38.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:38.104 11:32:20 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:38.104 11:32:20 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:20:38.104 11:32:20 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:38.363 11:32:21 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.363 11:32:21 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:38.363 11:32:21 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.363 11:32:21 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:38.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.363 --rc genhtml_branch_coverage=1 00:20:38.363 --rc genhtml_function_coverage=1 00:20:38.363 --rc genhtml_legend=1 00:20:38.363 --rc geninfo_all_blocks=1 00:20:38.363 --rc geninfo_unexecuted_blocks=1 00:20:38.363 00:20:38.363 ' 00:20:38.363 11:32:21 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:38.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.363 --rc genhtml_branch_coverage=1 00:20:38.363 --rc genhtml_function_coverage=1 00:20:38.363 --rc genhtml_legend=1 00:20:38.363 --rc geninfo_all_blocks=1 00:20:38.363 --rc geninfo_unexecuted_blocks=1 00:20:38.363 00:20:38.363 ' 00:20:38.363 
11:32:21 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:38.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.363 --rc genhtml_branch_coverage=1 00:20:38.363 --rc genhtml_function_coverage=1 00:20:38.363 --rc genhtml_legend=1 00:20:38.363 --rc geninfo_all_blocks=1 00:20:38.364 --rc geninfo_unexecuted_blocks=1 00:20:38.364 00:20:38.364 ' 00:20:38.364 11:32:21 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:38.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.364 --rc genhtml_branch_coverage=1 00:20:38.364 --rc genhtml_function_coverage=1 00:20:38.364 --rc genhtml_legend=1 00:20:38.364 --rc geninfo_all_blocks=1 00:20:38.364 --rc geninfo_unexecuted_blocks=1 00:20:38.364 00:20:38.364 ' 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
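The long `scripts/common.sh` trace above is `lt 1.15 2` (via `cmp_versions`) deciding whether the installed lcov predates version 2: both version strings are split on `.-:` and compared numerically field by field, with missing fields treated as 0. A condensed sketch of that logic (the function name and structure here are illustrative, not the exact upstream helpers):

```shell
# Return 0 (true) when version $1 is strictly older than version $2.
version_lt() {
    local IFS=.-: v a b          # split on the same separators as the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad the shorter version with 0s
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1                              # equal is not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints: lcov 1.15 predates 2
```

The `lcov_rc_opt` exports that follow in the log are the consequence of this check: lcov 1.x takes `--rc lcov_branch_coverage=1`-style options that lcov 2 renamed.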
00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:38.364 11:32:21 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:38.364 11:32:21 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:38.364 11:32:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:38.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90077 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90077 00:20:38.364 11:32:21 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:38.364 11:32:21 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 90077 ']' 00:20:38.364 11:32:21 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.364 11:32:21 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:38.364 11:32:21 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.364 11:32:21 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:38.364 11:32:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:38.364 [2024-11-15 11:32:21.275822] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
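`run_spdk_tgt` above forks `spdk_tgt` (pid 90077) and then `waitforlisten` polls, with the `local max_retries=100` budget visible in the trace, until the app is up and its RPC socket answers. A rough self-contained sketch of that polling idea (the real helper also rechecks that the pid is still alive and probes the socket through rpc.py; the interval and error handling here are illustrative assumptions):

```shell
# Poll until a UNIX-domain socket path appears, roughly what waitforlisten
# does for /var/tmp/spdk.sock. Retry budget mirrors 'local max_retries=100'.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    while (( max_retries-- > 0 )); do
        if [ -S "$sock" ]; then return 0; fi   # -S: path exists, is a socket
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

In the log, this wait is the gap between `spdk_tgt_pid=90077` being recorded and the first `rpc_cmd` of the spdkcli test succeeding.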
00:20:38.364 [2024-11-15 11:32:21.276054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90077 ] 00:20:38.623 [2024-11-15 11:32:21.468398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:38.882 [2024-11-15 11:32:21.612164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.882 [2024-11-15 11:32:21.612200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.819 11:32:22 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:39.819 11:32:22 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:20:39.819 11:32:22 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:39.819 11:32:22 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:39.819 11:32:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.819 11:32:22 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:39.819 11:32:22 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:39.819 11:32:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.819 11:32:22 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:39.819 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:39.819 ' 00:20:41.721 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:41.721 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:41.721 11:32:24 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:41.721 11:32:24 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:41.721 11:32:24 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:41.721 11:32:24 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:41.721 11:32:24 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:41.721 11:32:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:41.721 11:32:24 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:41.721 ' 00:20:42.656 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:42.656 11:32:25 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:42.656 11:32:25 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:42.656 11:32:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:42.914 11:32:25 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:42.914 11:32:25 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:42.914 11:32:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:42.914 11:32:25 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:42.914 11:32:25 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:43.480 11:32:26 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:43.480 11:32:26 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:43.480 11:32:26 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:43.480 11:32:26 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:43.480 11:32:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:43.480 11:32:26 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:43.480 11:32:26 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:43.480 11:32:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:43.480 11:32:26 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:43.480 ' 00:20:44.415 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:44.672 11:32:27 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:44.673 11:32:27 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:44.673 11:32:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:44.673 11:32:27 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:44.673 11:32:27 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:44.673 11:32:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:44.673 11:32:27 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:44.673 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:44.673 ' 00:20:46.050 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:46.050 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:46.308 11:32:29 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:46.308 11:32:29 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:46.308 11:32:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:46.308 11:32:29 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90077 00:20:46.308 11:32:29 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90077 ']' 00:20:46.308 11:32:29 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90077 00:20:46.308 11:32:29 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:20:46.308 11:32:29 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:46.308 11:32:29 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90077 00:20:46.308 11:32:29 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:46.308 11:32:29 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:46.309 11:32:29 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90077' 00:20:46.309 killing process with pid 90077 00:20:46.309 11:32:29 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 90077 00:20:46.309 11:32:29 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 90077 00:20:48.842 11:32:31 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:48.842 Process with pid 90077 is not found 00:20:48.842 11:32:31 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90077 ']' 00:20:48.842 11:32:31 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90077 00:20:48.842 11:32:31 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90077 ']' 00:20:48.842 11:32:31 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90077 00:20:48.842 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (90077) - No such process 00:20:48.842 11:32:31 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 90077 is not found' 00:20:48.842 11:32:31 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:48.842 11:32:31 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:48.842 11:32:31 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:48.842 11:32:31 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:48.842 00:20:48.842 real 0m10.540s 00:20:48.842 user 0m21.702s 00:20:48.842 sys 
0m1.335s 00:20:48.842 11:32:31 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:48.842 11:32:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:48.842 ************************************ 00:20:48.842 END TEST spdkcli_raid 00:20:48.842 ************************************ 00:20:48.842 11:32:31 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:48.842 11:32:31 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:48.842 11:32:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:48.842 11:32:31 -- common/autotest_common.sh@10 -- # set +x 00:20:48.842 ************************************ 00:20:48.842 START TEST blockdev_raid5f 00:20:48.842 ************************************ 00:20:48.842 11:32:31 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:48.842 * Looking for test storage... 00:20:48.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:48.842 11:32:31 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:48.842 11:32:31 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:48.842 11:32:31 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:20:48.842 11:32:31 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.842 11:32:31 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:48.842 11:32:31 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.842 11:32:31 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:48.842 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.842 --rc genhtml_branch_coverage=1 00:20:48.842 --rc genhtml_function_coverage=1 00:20:48.842 --rc genhtml_legend=1 00:20:48.842 --rc geninfo_all_blocks=1 00:20:48.842 --rc geninfo_unexecuted_blocks=1 00:20:48.842 00:20:48.842 ' 00:20:48.842 11:32:31 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.842 --rc genhtml_branch_coverage=1 00:20:48.842 --rc genhtml_function_coverage=1 00:20:48.842 --rc genhtml_legend=1 00:20:48.842 --rc geninfo_all_blocks=1 00:20:48.842 --rc geninfo_unexecuted_blocks=1 00:20:48.842 00:20:48.842 ' 00:20:48.842 11:32:31 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.842 --rc genhtml_branch_coverage=1 00:20:48.842 --rc genhtml_function_coverage=1 00:20:48.842 --rc genhtml_legend=1 00:20:48.842 --rc geninfo_all_blocks=1 00:20:48.842 --rc geninfo_unexecuted_blocks=1 00:20:48.842 00:20:48.842 ' 00:20:48.842 11:32:31 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.842 --rc genhtml_branch_coverage=1 00:20:48.842 --rc genhtml_function_coverage=1 00:20:48.842 --rc genhtml_legend=1 00:20:48.842 --rc geninfo_all_blocks=1 00:20:48.842 --rc geninfo_unexecuted_blocks=1 00:20:48.842 00:20:48.842 ' 00:20:48.842 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:48.842 11:32:31 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:48.842 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90353 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:48.843 11:32:31 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90353 00:20:48.843 11:32:31 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 90353 ']' 00:20:48.843 11:32:31 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.843 11:32:31 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:48.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.843 11:32:31 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.843 11:32:31 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:48.843 11:32:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:49.102 [2024-11-15 11:32:31.813053] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:20:49.102 [2024-11-15 11:32:31.813317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90353 ] 00:20:49.102 [2024-11-15 11:32:32.000829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.361 [2024-11-15 11:32:32.146478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.297 11:32:33 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:50.297 11:32:33 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:20:50.297 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:20:50.297 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:20:50.297 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:50.297 11:32:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.297 11:32:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:50.297 Malloc0 00:20:50.297 Malloc1 00:20:50.297 Malloc2 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:50.557 11:32:33 
blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3953b1a0-f7c3-4482-8cec-3f5a2ef339f0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3953b1a0-f7c3-4482-8cec-3f5a2ef339f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3953b1a0-f7c3-4482-8cec-3f5a2ef339f0",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "84cf6b5b-94e4-46d1-ac3d-00ed145b3bbb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3b26e956-7e76-495e-88be-b57968589a85",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "71f06322-8622-4681-9a99-f2d4d2a7684e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:20:50.557 11:32:33 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90353 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 90353 ']' 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 90353 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:50.557 
11:32:33 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90353 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:50.557 killing process with pid 90353 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90353' 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 90353 00:20:50.557 11:32:33 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 90353 00:20:53.089 11:32:36 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:53.089 11:32:36 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:53.089 11:32:36 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:53.089 11:32:36 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:53.089 11:32:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:53.089 ************************************ 00:20:53.089 START TEST bdev_hello_world 00:20:53.089 ************************************ 00:20:53.089 11:32:36 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:53.348 [2024-11-15 11:32:36.109279] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:20:53.348 [2024-11-15 11:32:36.109468] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90426 ] 00:20:53.348 [2024-11-15 11:32:36.288086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.607 [2024-11-15 11:32:36.431806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.175 [2024-11-15 11:32:37.016306] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:54.175 [2024-11-15 11:32:37.016396] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:54.175 [2024-11-15 11:32:37.016438] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:54.175 [2024-11-15 11:32:37.017048] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:54.175 [2024-11-15 11:32:37.017352] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:54.175 [2024-11-15 11:32:37.017398] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:54.175 [2024-11-15 11:32:37.017473] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:20:54.175 00:20:54.175 [2024-11-15 11:32:37.017503] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:55.552 00:20:55.552 real 0m2.312s 00:20:55.552 user 0m1.807s 00:20:55.552 sys 0m0.377s 00:20:55.552 11:32:38 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:55.552 ************************************ 00:20:55.552 END TEST bdev_hello_world 00:20:55.552 ************************************ 00:20:55.552 11:32:38 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:55.552 11:32:38 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:20:55.552 11:32:38 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:55.552 11:32:38 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:55.552 11:32:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:55.552 ************************************ 00:20:55.552 START TEST bdev_bounds 00:20:55.552 ************************************ 00:20:55.552 11:32:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:20:55.552 11:32:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90469 00:20:55.552 Process bdevio pid: 90469 00:20:55.552 11:32:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:55.552 11:32:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90469' 00:20:55.552 11:32:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:55.552 11:32:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90469 00:20:55.552 11:32:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90469 ']' 00:20:55.552 11:32:38 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.552 11:32:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:55.552 11:32:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.552 11:32:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:55.552 11:32:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:55.552 [2024-11-15 11:32:38.482525] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:20:55.552 [2024-11-15 11:32:38.482739] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90469 ] 00:20:55.810 [2024-11-15 11:32:38.656960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:56.069 [2024-11-15 11:32:38.807109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.069 [2024-11-15 11:32:38.807386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.069 [2024-11-15 11:32:38.807402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.637 11:32:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:56.637 11:32:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:20:56.637 11:32:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:56.895 I/O targets: 00:20:56.895 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:56.895 00:20:56.895 
00:20:56.895 CUnit - A unit testing framework for C - Version 2.1-3 00:20:56.896 http://cunit.sourceforge.net/ 00:20:56.896 00:20:56.896 00:20:56.896 Suite: bdevio tests on: raid5f 00:20:56.896 Test: blockdev write read block ...passed 00:20:56.896 Test: blockdev write zeroes read block ...passed 00:20:56.896 Test: blockdev write zeroes read no split ...passed 00:20:56.896 Test: blockdev write zeroes read split ...passed 00:20:57.155 Test: blockdev write zeroes read split partial ...passed 00:20:57.155 Test: blockdev reset ...passed 00:20:57.155 Test: blockdev write read 8 blocks ...passed 00:20:57.155 Test: blockdev write read size > 128k ...passed 00:20:57.155 Test: blockdev write read invalid size ...passed 00:20:57.155 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:57.155 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:57.155 Test: blockdev write read max offset ...passed 00:20:57.155 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:57.155 Test: blockdev writev readv 8 blocks ...passed 00:20:57.155 Test: blockdev writev readv 30 x 1block ...passed 00:20:57.155 Test: blockdev writev readv block ...passed 00:20:57.155 Test: blockdev writev readv size > 128k ...passed 00:20:57.155 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:57.155 Test: blockdev comparev and writev ...passed 00:20:57.155 Test: blockdev nvme passthru rw ...passed 00:20:57.155 Test: blockdev nvme passthru vendor specific ...passed 00:20:57.155 Test: blockdev nvme admin passthru ...passed 00:20:57.155 Test: blockdev copy ...passed 00:20:57.155 00:20:57.155 Run Summary: Type Total Ran Passed Failed Inactive 00:20:57.155 suites 1 1 n/a 0 0 00:20:57.155 tests 23 23 23 0 0 00:20:57.155 asserts 130 130 130 0 n/a 00:20:57.155 00:20:57.155 Elapsed time = 0.573 seconds 00:20:57.155 0 00:20:57.155 11:32:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90469 00:20:57.155 
11:32:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90469 ']' 00:20:57.155 11:32:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90469 00:20:57.155 11:32:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:20:57.155 11:32:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:57.155 11:32:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90469 00:20:57.155 11:32:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:57.155 11:32:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:57.155 killing process with pid 90469 00:20:57.155 11:32:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90469' 00:20:57.155 11:32:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90469 00:20:57.155 11:32:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90469 00:20:58.566 11:32:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:58.566 00:20:58.566 real 0m2.974s 00:20:58.566 user 0m7.388s 00:20:58.566 sys 0m0.506s 00:20:58.566 11:32:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:58.566 11:32:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:58.566 ************************************ 00:20:58.566 END TEST bdev_bounds 00:20:58.566 ************************************ 00:20:58.566 11:32:41 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:58.566 11:32:41 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:58.566 11:32:41 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:58.566 
11:32:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:58.566 ************************************ 00:20:58.566 START TEST bdev_nbd 00:20:58.566 ************************************ 00:20:58.566 11:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:58.566 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:58.566 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:58.566 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:58.566 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:58.566 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:58.566 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:58.566 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:58.566 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90529 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90529 /var/tmp/spdk-nbd.sock 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90529 ']' 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:58.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:58.567 11:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:58.826 [2024-11-15 11:32:41.556469] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:20:58.826 [2024-11-15 11:32:41.556748] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.826 [2024-11-15 11:32:41.738251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.085 [2024-11-15 11:32:41.881999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:59.653 11:32:42 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:59.912 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:59.912 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:59.912 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:59.912 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:59.913 1+0 records in 00:20:59.913 1+0 records out 00:20:59.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296449 s, 13.8 MB/s 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:59.913 11:32:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:00.172 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:00.172 { 00:21:00.172 "nbd_device": "/dev/nbd0", 00:21:00.172 "bdev_name": "raid5f" 00:21:00.172 } 00:21:00.172 ]' 00:21:00.172 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:00.172 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:00.172 { 00:21:00.172 "nbd_device": "/dev/nbd0", 00:21:00.172 "bdev_name": "raid5f" 00:21:00.172 } 00:21:00.172 ]' 00:21:00.172 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:00.431 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:00.431 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:00.431 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:00.431 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:00.431 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:00.431 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:00.431 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:00.689 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:21:00.689 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:00.689 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:00.689 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:00.689 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:00.690 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:00.690 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:00.690 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:00.690 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:00.690 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:00.690 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:00.949 11:32:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:21:01.208 /dev/nbd0 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:01.208 11:32:44 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.208 1+0 records in 00:21:01.208 1+0 records out 00:21:01.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423271 s, 9.7 MB/s 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:01.208 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:01.467 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:01.467 { 00:21:01.467 "nbd_device": "/dev/nbd0", 00:21:01.467 "bdev_name": "raid5f" 00:21:01.467 } 00:21:01.467 ]' 00:21:01.467 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:01.467 { 00:21:01.467 "nbd_device": "/dev/nbd0", 00:21:01.467 "bdev_name": "raid5f" 00:21:01.467 } 00:21:01.467 ]' 00:21:01.467 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:01.727 256+0 records in 00:21:01.727 256+0 records out 00:21:01.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00835502 s, 126 MB/s 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:01.727 256+0 records in 00:21:01.727 256+0 records out 00:21:01.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0381769 s, 27.5 MB/s 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:01.727 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:01.985 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:01.985 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:01.985 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:01.985 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:01.985 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:01.985 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:01.985 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:01.985 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:01.985 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:01.985 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:01.985 11:32:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:02.243 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:02.502 malloc_lvol_verify 00:21:02.502 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:02.762 ecb8302d-61ec-49b6-99f5-b7d3e0021cf7 00:21:02.762 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:03.021 b7a8a11a-0112-4c07-8e8e-3c6d1dd5db72 00:21:03.021 11:32:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:03.280 /dev/nbd0 00:21:03.280 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:03.280 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:03.280 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:03.280 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:03.280 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:03.280 mke2fs 1.47.0 (5-Feb-2023) 00:21:03.280 Discarding device blocks: 0/4096 done 00:21:03.280 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:03.280 00:21:03.280 Allocating group tables: 0/1 done 00:21:03.280 Writing inode tables: 0/1 done 00:21:03.538 Creating journal (1024 blocks): done 00:21:03.538 Writing superblocks and filesystem accounting information: 0/1 done 00:21:03.538 00:21:03.538 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:03.538 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:03.538 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:03.538 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:03.538 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:03.538 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:03.538 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:03.538 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:03.538 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:03.538 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:03.538 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90529 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90529 ']' 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90529 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90529 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:03.797 killing process with pid 90529 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90529' 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90529 00:21:03.797 11:32:46 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 90529 00:21:05.177 11:32:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:05.177 00:21:05.177 real 0m6.508s 00:21:05.177 user 0m9.184s 00:21:05.177 sys 0m1.435s 00:21:05.177 11:32:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:05.177 11:32:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:05.177 ************************************ 00:21:05.177 END TEST bdev_nbd 00:21:05.177 ************************************ 00:21:05.177 11:32:47 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:21:05.177 11:32:47 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:21:05.177 11:32:47 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:21:05.177 11:32:47 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:21:05.177 11:32:47 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:05.177 11:32:47 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:05.177 11:32:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:05.177 ************************************ 00:21:05.177 START TEST bdev_fio 00:21:05.177 ************************************ 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:05.177 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:21:05.177 11:32:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:05.177 ************************************ 00:21:05.177 START TEST bdev_fio_rw_verify 00:21:05.177 ************************************ 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.177 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:05.436 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:05.436 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:05.436 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:21:05.436 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:05.436 11:32:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:05.436 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:05.436 fio-3.35 00:21:05.436 Starting 1 thread 00:21:17.684 00:21:17.684 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90741: Fri Nov 15 11:32:59 2024 00:21:17.684 read: IOPS=8447, BW=33.0MiB/s (34.6MB/s)(330MiB/10001msec) 00:21:17.684 slat (usec): min=22, max=250, avg=29.51, stdev= 8.30 00:21:17.684 clat (usec): min=13, max=630, avg=188.31, stdev=74.80 00:21:17.684 lat (usec): min=42, max=682, avg=217.82, stdev=76.56 00:21:17.684 clat percentiles (usec): 00:21:17.684 | 50.000th=[ 186], 99.000th=[ 351], 99.900th=[ 519], 99.990th=[ 603], 00:21:17.684 | 99.999th=[ 635] 00:21:17.684 write: IOPS=8919, BW=34.8MiB/s (36.5MB/s)(344MiB/9865msec); 0 zone resets 00:21:17.684 slat (usec): min=11, max=232, avg=23.47, stdev= 8.16 00:21:17.684 clat (usec): min=71, max=1265, avg=429.81, stdev=70.45 00:21:17.684 lat (usec): min=90, max=1497, avg=453.28, stdev=72.82 00:21:17.684 clat percentiles (usec): 00:21:17.684 | 50.000th=[ 429], 99.000th=[ 594], 99.900th=[ 881], 99.990th=[ 1029], 00:21:17.684 | 99.999th=[ 1270] 00:21:17.684 bw ( KiB/s): min=30376, max=39984, per=98.52%, avg=35150.32, stdev=2475.09, samples=19 00:21:17.684 iops : min= 7594, max= 9996, avg=8787.58, stdev=618.77, samples=19 00:21:17.684 lat (usec) : 20=0.01%, 50=0.01%, 100=7.08%, 
250=30.64%, 500=54.97% 00:21:17.684 lat (usec) : 750=7.14%, 1000=0.16% 00:21:17.684 lat (msec) : 2=0.01% 00:21:17.684 cpu : usr=98.36%, sys=0.66%, ctx=21, majf=0, minf=7377 00:21:17.684 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.684 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.684 issued rwts: total=84487,87990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.684 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:17.684 00:21:17.684 Run status group 0 (all jobs): 00:21:17.684 READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=330MiB (346MB), run=10001-10001msec 00:21:17.684 WRITE: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=344MiB (360MB), run=9865-9865msec 00:21:18.251 ----------------------------------------------------- 00:21:18.251 Suppressions used: 00:21:18.251 count bytes template 00:21:18.251 1 7 /usr/src/fio/parse.c 00:21:18.251 878 84288 /usr/src/fio/iolog.c 00:21:18.251 1 8 libtcmalloc_minimal.so 00:21:18.251 1 904 libcrypto.so 00:21:18.251 ----------------------------------------------------- 00:21:18.251 00:21:18.251 00:21:18.251 real 0m13.017s 00:21:18.251 user 0m13.311s 00:21:18.251 sys 0m0.869s 00:21:18.251 11:33:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:18.251 11:33:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:18.251 ************************************ 00:21:18.251 END TEST bdev_fio_rw_verify 00:21:18.251 ************************************ 00:21:18.251 11:33:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:18.251 11:33:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:18.251 11:33:01 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:18.251 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:18.251 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:21:18.251 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:21:18.251 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:21:18.252 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:21:18.252 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:18.252 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:21:18.252 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:21:18.252 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:18.252 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:21:18.252 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:21:18.252 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:21:18.252 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:21:18.252 11:33:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3953b1a0-f7c3-4482-8cec-3f5a2ef339f0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3953b1a0-f7c3-4482-8cec-3f5a2ef339f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3953b1a0-f7c3-4482-8cec-3f5a2ef339f0",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "84cf6b5b-94e4-46d1-ac3d-00ed145b3bbb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3b26e956-7e76-495e-88be-b57968589a85",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "71f06322-8622-4681-9a99-f2d4d2a7684e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:18.252 11:33:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:18.511 11:33:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:18.511 11:33:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:18.511 /home/vagrant/spdk_repo/spdk 00:21:18.511 11:33:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:18.511 11:33:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:18.511 11:33:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:21:18.511 00:21:18.511 real 0m13.260s 00:21:18.511 user 0m13.422s 00:21:18.511 sys 0m0.959s 00:21:18.511 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:18.511 11:33:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:18.511 ************************************ 00:21:18.511 END TEST bdev_fio 00:21:18.511 ************************************ 00:21:18.511 11:33:01 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:18.511 11:33:01 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:18.511 11:33:01 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:21:18.511 11:33:01 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:18.511 11:33:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:18.511 ************************************ 00:21:18.511 START TEST bdev_verify 00:21:18.511 ************************************ 00:21:18.511 11:33:01 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:18.511 [2024-11-15 11:33:01.389935] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:21:18.511 [2024-11-15 11:33:01.390168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90895 ] 00:21:18.770 [2024-11-15 11:33:01.573163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:19.029 [2024-11-15 11:33:01.728826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.029 [2024-11-15 11:33:01.728851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.595 Running I/O for 5 seconds... 00:21:21.532 10392.00 IOPS, 40.59 MiB/s [2024-11-15T11:33:05.417Z] 11806.50 IOPS, 46.12 MiB/s [2024-11-15T11:33:06.793Z] 11939.00 IOPS, 46.64 MiB/s [2024-11-15T11:33:07.730Z] 11692.50 IOPS, 45.67 MiB/s [2024-11-15T11:33:07.730Z] 11999.80 IOPS, 46.87 MiB/s 00:21:24.780 Latency(us) 00:21:24.780 [2024-11-15T11:33:07.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.780 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:24.780 Verification LBA range: start 0x0 length 0x2000 00:21:24.780 raid5f : 5.01 6019.60 23.51 0.00 0.00 32088.42 284.86 27405.96 00:21:24.780 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:24.780 Verification LBA range: start 0x2000 length 0x2000 00:21:24.780 raid5f : 5.02 5987.98 23.39 0.00 0.00 32109.92 112.17 27644.28 00:21:24.780 [2024-11-15T11:33:07.730Z] =================================================================================================================== 00:21:24.780 [2024-11-15T11:33:07.730Z] Total : 12007.59 46.90 0.00 0.00 32099.15 112.17 27644.28 00:21:26.156 00:21:26.156 real 0m7.424s 00:21:26.156 user 0m13.504s 00:21:26.156 sys 0m0.400s 00:21:26.156 11:33:08 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:26.156 
************************************ 00:21:26.156 11:33:08 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:26.156 END TEST bdev_verify 00:21:26.156 ************************************ 00:21:26.156 11:33:08 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:26.156 11:33:08 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:21:26.156 11:33:08 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:26.156 11:33:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:26.156 ************************************ 00:21:26.156 START TEST bdev_verify_big_io 00:21:26.156 ************************************ 00:21:26.156 11:33:08 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:26.156 [2024-11-15 11:33:08.861994] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:21:26.156 [2024-11-15 11:33:08.862892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90995 ] 00:21:26.156 [2024-11-15 11:33:09.040181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:26.415 [2024-11-15 11:33:09.179601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.415 [2024-11-15 11:33:09.179604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.984 Running I/O for 5 seconds... 
00:21:28.928 568.00 IOPS, 35.50 MiB/s [2024-11-15T11:33:13.259Z] 634.00 IOPS, 39.62 MiB/s [2024-11-15T11:33:14.194Z] 676.67 IOPS, 42.29 MiB/s [2024-11-15T11:33:15.130Z] 698.00 IOPS, 43.62 MiB/s [2024-11-15T11:33:15.130Z] 710.80 IOPS, 44.42 MiB/s 00:21:32.180 Latency(us) 00:21:32.180 [2024-11-15T11:33:15.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.180 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:32.180 Verification LBA range: start 0x0 length 0x200 00:21:32.180 raid5f : 5.23 364.51 22.78 0.00 0.00 8750696.15 242.04 364141.85 00:21:32.180 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:32.180 Verification LBA range: start 0x200 length 0x200 00:21:32.180 raid5f : 5.25 362.86 22.68 0.00 0.00 8813659.21 203.87 367954.85 00:21:32.180 [2024-11-15T11:33:15.130Z] =================================================================================================================== 00:21:32.180 [2024-11-15T11:33:15.130Z] Total : 727.36 45.46 0.00 0.00 8782177.68 203.87 367954.85 00:21:33.557 00:21:33.557 real 0m7.678s 00:21:33.557 user 0m14.044s 00:21:33.557 sys 0m0.381s 00:21:33.557 11:33:16 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:33.557 11:33:16 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:33.557 ************************************ 00:21:33.557 END TEST bdev_verify_big_io 00:21:33.557 ************************************ 00:21:33.557 11:33:16 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:33.557 11:33:16 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:33.557 11:33:16 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:33.557 11:33:16 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:33.557 ************************************ 00:21:33.557 START TEST bdev_write_zeroes 00:21:33.557 ************************************ 00:21:33.557 11:33:16 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:33.816 [2024-11-15 11:33:16.610299] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:21:33.816 [2024-11-15 11:33:16.610520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91093 ] 00:21:34.075 [2024-11-15 11:33:16.801353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.075 [2024-11-15 11:33:16.940481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.641 Running I/O for 1 seconds... 
00:21:36.016 19695.00 IOPS, 76.93 MiB/s 00:21:36.016 Latency(us) 00:21:36.016 [2024-11-15T11:33:18.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.016 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:36.016 raid5f : 1.01 19675.00 76.86 0.00 0.00 6479.10 2174.60 8757.99 00:21:36.016 [2024-11-15T11:33:18.966Z] =================================================================================================================== 00:21:36.016 [2024-11-15T11:33:18.966Z] Total : 19675.00 76.86 0.00 0.00 6479.10 2174.60 8757.99 00:21:37.404 00:21:37.404 real 0m3.513s 00:21:37.404 user 0m2.985s 00:21:37.404 sys 0m0.390s 00:21:37.404 11:33:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:37.404 ************************************ 00:21:37.404 END TEST bdev_write_zeroes 00:21:37.404 ************************************ 00:21:37.404 11:33:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:37.404 11:33:20 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:37.404 11:33:20 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:37.404 11:33:20 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:37.404 11:33:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:37.404 ************************************ 00:21:37.404 START TEST bdev_json_nonenclosed 00:21:37.404 ************************************ 00:21:37.404 11:33:20 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:37.404 [2024-11-15 
11:33:20.195350] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:21:37.404 [2024-11-15 11:33:20.195596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91151 ] 00:21:37.676 [2024-11-15 11:33:20.385598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.676 [2024-11-15 11:33:20.534221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.676 [2024-11-15 11:33:20.534377] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:37.676 [2024-11-15 11:33:20.534421] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:37.676 [2024-11-15 11:33:20.534436] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:37.936 00:21:37.936 real 0m0.753s 00:21:37.936 user 0m0.479s 00:21:37.936 sys 0m0.167s 00:21:37.936 11:33:20 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:37.936 11:33:20 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:37.936 ************************************ 00:21:37.936 END TEST bdev_json_nonenclosed 00:21:37.936 ************************************ 00:21:37.936 11:33:20 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:37.936 11:33:20 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:37.936 11:33:20 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:37.936 11:33:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:37.936 
************************************ 00:21:37.936 START TEST bdev_json_nonarray 00:21:37.936 ************************************ 00:21:37.936 11:33:20 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:38.195 [2024-11-15 11:33:20.990853] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:21:38.195 [2024-11-15 11:33:20.991070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91174 ] 00:21:38.454 [2024-11-15 11:33:21.177183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.454 [2024-11-15 11:33:21.324224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.454 [2024-11-15 11:33:21.324421] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:38.454 [2024-11-15 11:33:21.324452] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:38.454 [2024-11-15 11:33:21.324481] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:38.712 00:21:38.712 real 0m0.737s 00:21:38.712 user 0m0.475s 00:21:38.713 sys 0m0.155s 00:21:38.713 11:33:21 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:38.713 11:33:21 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:38.713 ************************************ 00:21:38.713 END TEST bdev_json_nonarray 00:21:38.713 ************************************ 00:21:38.713 11:33:21 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:21:38.713 11:33:21 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:21:38.713 11:33:21 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:21:38.713 11:33:21 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:21:38.713 11:33:21 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:21:38.713 11:33:21 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:38.713 11:33:21 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:38.972 11:33:21 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:21:38.972 11:33:21 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:21:38.972 11:33:21 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:21:38.972 11:33:21 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:21:38.972 00:21:38.972 real 0m50.177s 00:21:38.972 user 1m7.775s 00:21:38.972 sys 0m5.908s 00:21:38.972 11:33:21 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:38.972 ************************************ 00:21:38.972 END TEST blockdev_raid5f 00:21:38.972 
************************************ 00:21:38.972 11:33:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:38.972 11:33:21 -- spdk/autotest.sh@194 -- # uname -s 00:21:38.972 11:33:21 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:38.972 11:33:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:38.972 11:33:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:38.972 11:33:21 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@256 -- # timing_exit lib 00:21:38.972 11:33:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:38.972 11:33:21 -- common/autotest_common.sh@10 -- # set +x 00:21:38.972 11:33:21 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:38.972 11:33:21 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:21:38.972 11:33:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:38.972 11:33:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:38.972 11:33:21 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:21:38.972 11:33:21 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 
00:21:38.972 11:33:21 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:21:38.972 11:33:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:38.972 11:33:21 -- common/autotest_common.sh@10 -- # set +x 00:21:38.972 11:33:21 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:21:38.972 11:33:21 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:21:38.972 11:33:21 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:21:38.972 11:33:21 -- common/autotest_common.sh@10 -- # set +x 00:21:40.874 INFO: APP EXITING 00:21:40.874 INFO: killing all VMs 00:21:40.874 INFO: killing vhost app 00:21:40.874 INFO: EXIT DONE 00:21:40.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:40.874 Waiting for block devices as requested 00:21:40.874 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:41.133 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:41.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:41.961 Cleaning 00:21:41.961 Removing: /var/run/dpdk/spdk0/config 00:21:41.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:41.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:41.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:41.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:41.961 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:41.961 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:41.961 Removing: /dev/shm/spdk_tgt_trace.pid56725 00:21:41.961 Removing: /var/run/dpdk/spdk0 00:21:41.961 Removing: /var/run/dpdk/spdk_pid56501 00:21:41.961 Removing: /var/run/dpdk/spdk_pid56725 00:21:41.961 Removing: /var/run/dpdk/spdk_pid56954 00:21:41.961 Removing: /var/run/dpdk/spdk_pid57058 00:21:41.961 Removing: /var/run/dpdk/spdk_pid57114 00:21:41.961 Removing: /var/run/dpdk/spdk_pid57248 00:21:41.961 Removing: /var/run/dpdk/spdk_pid57266 00:21:41.961 
Removing: /var/run/dpdk/spdk_pid57476 00:21:41.961 Removing: /var/run/dpdk/spdk_pid57581 00:21:41.961 Removing: /var/run/dpdk/spdk_pid57688 00:21:41.961 Removing: /var/run/dpdk/spdk_pid57810 00:21:41.961 Removing: /var/run/dpdk/spdk_pid57912 00:21:41.961 Removing: /var/run/dpdk/spdk_pid57952 00:21:41.961 Removing: /var/run/dpdk/spdk_pid57994 00:21:41.961 Removing: /var/run/dpdk/spdk_pid58059 00:21:41.961 Removing: /var/run/dpdk/spdk_pid58176 00:21:41.961 Removing: /var/run/dpdk/spdk_pid58644 00:21:41.961 Removing: /var/run/dpdk/spdk_pid58715 00:21:41.961 Removing: /var/run/dpdk/spdk_pid58789 00:21:41.961 Removing: /var/run/dpdk/spdk_pid58805 00:21:41.961 Removing: /var/run/dpdk/spdk_pid58955 00:21:41.961 Removing: /var/run/dpdk/spdk_pid58971 00:21:41.961 Removing: /var/run/dpdk/spdk_pid59124 00:21:41.961 Removing: /var/run/dpdk/spdk_pid59140 00:21:41.961 Removing: /var/run/dpdk/spdk_pid59210 00:21:41.961 Removing: /var/run/dpdk/spdk_pid59233 00:21:41.961 Removing: /var/run/dpdk/spdk_pid59297 00:21:41.961 Removing: /var/run/dpdk/spdk_pid59315 00:21:41.961 Removing: /var/run/dpdk/spdk_pid59516 00:21:41.961 Removing: /var/run/dpdk/spdk_pid59551 00:21:41.961 Removing: /var/run/dpdk/spdk_pid59639 00:21:41.961 Removing: /var/run/dpdk/spdk_pid61015 00:21:41.961 Removing: /var/run/dpdk/spdk_pid61232 00:21:41.961 Removing: /var/run/dpdk/spdk_pid61372 00:21:41.961 Removing: /var/run/dpdk/spdk_pid62032 00:21:41.961 Removing: /var/run/dpdk/spdk_pid62248 00:21:41.961 Removing: /var/run/dpdk/spdk_pid62389 00:21:41.961 Removing: /var/run/dpdk/spdk_pid63044 00:21:41.961 Removing: /var/run/dpdk/spdk_pid63379 00:21:41.961 Removing: /var/run/dpdk/spdk_pid63525 00:21:41.961 Removing: /var/run/dpdk/spdk_pid64943 00:21:41.961 Removing: /var/run/dpdk/spdk_pid65196 00:21:41.961 Removing: /var/run/dpdk/spdk_pid65342 00:21:41.961 Removing: /var/run/dpdk/spdk_pid66761 00:21:41.961 Removing: /var/run/dpdk/spdk_pid67020 00:21:41.961 Removing: /var/run/dpdk/spdk_pid67171 00:21:41.961 Removing: 
/var/run/dpdk/spdk_pid68589 00:21:41.961 Removing: /var/run/dpdk/spdk_pid69046 00:21:41.961 Removing: /var/run/dpdk/spdk_pid69190 00:21:41.961 Removing: /var/run/dpdk/spdk_pid70699 00:21:41.961 Removing: /var/run/dpdk/spdk_pid70969 00:21:41.961 Removing: /var/run/dpdk/spdk_pid71117 00:21:41.962 Removing: /var/run/dpdk/spdk_pid72635 00:21:41.962 Removing: /var/run/dpdk/spdk_pid72900 00:21:41.962 Removing: /var/run/dpdk/spdk_pid73051 00:21:41.962 Removing: /var/run/dpdk/spdk_pid74562 00:21:41.962 Removing: /var/run/dpdk/spdk_pid75059 00:21:41.962 Removing: /var/run/dpdk/spdk_pid75206 00:21:41.962 Removing: /var/run/dpdk/spdk_pid75354 00:21:41.962 Removing: /var/run/dpdk/spdk_pid75807 00:21:41.962 Removing: /var/run/dpdk/spdk_pid76570 00:21:41.962 Removing: /var/run/dpdk/spdk_pid76972 00:21:41.962 Removing: /var/run/dpdk/spdk_pid77672 00:21:42.221 Removing: /var/run/dpdk/spdk_pid78147 00:21:42.221 Removing: /var/run/dpdk/spdk_pid78940 00:21:42.221 Removing: /var/run/dpdk/spdk_pid79360 00:21:42.221 Removing: /var/run/dpdk/spdk_pid81365 00:21:42.221 Removing: /var/run/dpdk/spdk_pid81814 00:21:42.221 Removing: /var/run/dpdk/spdk_pid82267 00:21:42.221 Removing: /var/run/dpdk/spdk_pid84390 00:21:42.221 Removing: /var/run/dpdk/spdk_pid84881 00:21:42.221 Removing: /var/run/dpdk/spdk_pid85391 00:21:42.221 Removing: /var/run/dpdk/spdk_pid86471 00:21:42.221 Removing: /var/run/dpdk/spdk_pid86805 00:21:42.221 Removing: /var/run/dpdk/spdk_pid87765 00:21:42.221 Removing: /var/run/dpdk/spdk_pid88094 00:21:42.221 Removing: /var/run/dpdk/spdk_pid89050 00:21:42.221 Removing: /var/run/dpdk/spdk_pid89385 00:21:42.221 Removing: /var/run/dpdk/spdk_pid90077 00:21:42.221 Removing: /var/run/dpdk/spdk_pid90353 00:21:42.221 Removing: /var/run/dpdk/spdk_pid90426 00:21:42.221 Removing: /var/run/dpdk/spdk_pid90469 00:21:42.221 Removing: /var/run/dpdk/spdk_pid90727 00:21:42.221 Removing: /var/run/dpdk/spdk_pid90895 00:21:42.221 Removing: /var/run/dpdk/spdk_pid90995 00:21:42.221 Removing: 
/var/run/dpdk/spdk_pid91093 00:21:42.221 Removing: /var/run/dpdk/spdk_pid91151 00:21:42.221 Removing: /var/run/dpdk/spdk_pid91174 00:21:42.221 Clean 00:21:42.221 11:33:25 -- common/autotest_common.sh@1451 -- # return 0 00:21:42.221 11:33:25 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:21:42.221 11:33:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.221 11:33:25 -- common/autotest_common.sh@10 -- # set +x 00:21:42.222 11:33:25 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:21:42.222 11:33:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.222 11:33:25 -- common/autotest_common.sh@10 -- # set +x 00:21:42.222 11:33:25 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:42.222 11:33:25 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:42.222 11:33:25 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:42.222 11:33:25 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:21:42.222 11:33:25 -- spdk/autotest.sh@394 -- # hostname 00:21:42.222 11:33:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:42.481 geninfo: WARNING: invalid characters removed from testname! 
00:22:09.125 11:33:49 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:10.059 11:33:52 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:13.339 11:33:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:15.870 11:33:58 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:18.401 11:34:01 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:20.932 11:34:03 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:23.463 11:34:06 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:23.463 11:34:06 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:23.463 11:34:06 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:23.463 11:34:06 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:23.463 11:34:06 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:23.463 11:34:06 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:23.721 + [[ -n 5205 ]] 00:22:23.721 + sudo kill 5205 00:22:23.731 [Pipeline] } 00:22:23.749 [Pipeline] // timeout 00:22:23.754 [Pipeline] } 00:22:23.770 [Pipeline] // stage 00:22:23.775 [Pipeline] } 00:22:23.789 [Pipeline] // catchError 00:22:23.798 [Pipeline] stage 00:22:23.800 [Pipeline] { (Stop VM) 00:22:23.811 [Pipeline] sh 00:22:24.088 + vagrant halt 00:22:27.375 ==> default: Halting domain... 00:22:32.691 [Pipeline] sh 00:22:32.971 + vagrant destroy -f 00:22:36.259 ==> default: Removing domain... 
00:22:36.271 [Pipeline] sh 00:22:36.554 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:22:36.563 [Pipeline] } 00:22:36.573 [Pipeline] // stage 00:22:36.578 [Pipeline] } 00:22:36.586 [Pipeline] // dir 00:22:36.590 [Pipeline] } 00:22:36.599 [Pipeline] // wrap 00:22:36.604 [Pipeline] } 00:22:36.612 [Pipeline] // catchError 00:22:36.618 [Pipeline] stage 00:22:36.619 [Pipeline] { (Epilogue) 00:22:36.629 [Pipeline] sh 00:22:36.905 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:42.201 [Pipeline] catchError 00:22:42.203 [Pipeline] { 00:22:42.216 [Pipeline] sh 00:22:42.496 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:42.496 Artifacts sizes are good 00:22:42.505 [Pipeline] } 00:22:42.518 [Pipeline] // catchError 00:22:42.530 [Pipeline] archiveArtifacts 00:22:42.538 Archiving artifacts 00:22:42.646 [Pipeline] cleanWs 00:22:42.658 [WS-CLEANUP] Deleting project workspace... 00:22:42.658 [WS-CLEANUP] Deferred wipeout is used... 00:22:42.664 [WS-CLEANUP] done 00:22:42.666 [Pipeline] } 00:22:42.682 [Pipeline] // stage 00:22:42.689 [Pipeline] } 00:22:42.704 [Pipeline] // node 00:22:42.709 [Pipeline] End of Pipeline 00:22:42.759 Finished: SUCCESS